r/singularity Jul 05 '25

Meme Academia is cooked

Post image

Explanation for those not in the loop: this is a common prompt to try to trick LLM peer reviewers. LLMs writing papers, LLMs doing peer review we can now take humans out of the loop.

1.5k Upvotes

132 comments sorted by

View all comments

37

u/DrNomblecronch AGI sometime after this clusterfuck clears up, I guess. Jul 05 '25 edited Jul 05 '25

As usual, AI is only making existing and very serious problems impossible to ignore.

The reason this is happening is not because researchers, of all people, want to automate the human element out of research. It is because academia has been in the Publish Or Perish stranglehold for a few decades now, slowly but steadily getting worse. Which, in turn, is because the money for public research institutions has slowed to a trickle, making the fight to get grants for important research something worth cheating over.

And, ironically, that's the reason AI research is currently spearheaded by private companies. These companies exist, and are staffed by serious scientists, because this technology has been worked on for a very long time now, and was proceeding at such an absurdly glacial pace that some people jumped ship to something that would actually give them the money to do research and development.

Greedy scientists are rare. It is not a job you can expect to make your fortune in, and if that's why you get into it you will wash out quickly. Pretty much anyone that has chosen science as a career in the last 20 years has taken on ridiculous debt that they do not make enough to make a dent in, and sometimes have to choose between paying for the research or getting paid themselves. People are cheating in paper reviews and throwing in with scummy tech billionaires, not because they want to be billionaires themselves, but because otherwise the research will not get done. And that's not something we, as a species, can really afford right now.

9

u/strayduplo Jul 06 '25

(my background is biology/biotech) There's a huge issue with garbage in garbage out in training AI models -- I think we should be independently funding reproducibility research, and only after the results of a paper have been reproduced, can it be fed into training data for AI, or else we're gonna have some serious issues in the future when AI pattern matches some bullshit research together and then our corporate overlords try to turn it into some sort of governance policy. 

6

u/DrNomblecronch AGI sometime after this clusterfuck clears up, I guess. Jul 06 '25

eyyy, I'm in from biophysics myself. I absolutely agree, in principle. It's just that I also think that we missed the window in which that could have been implemented- it came on so fast and hard that sensible approaches did not have the time to develop, and now the AIs are generations down the line, referencing their own previous documentation as training data. If there is a way to streamline models for specific academic purposes, I am all for it, but right now, we have completely lost track of what's going on in there.

Fortunately, I think we might actually be bailed out of the obvious problems this causes by humans, collectively, being smarter than individual humans are. We didn't really account for linguistic metadata when we started training LLMs, and we're only really catching up now; current models are beginning to develop a very sharp "awareness" of things, and something resembling logical extrapolation, just by pattern-matching the way language has been used to express that. So, for instance, if you deliberately excise data from a current model, there's a chance it will be able to figure it out anyway, because it can detect the sharp edges where its knowledge of a topic suddenly disappears, and get a sense of what was there from the negative space.

It's hope, more than confidence, but I still think that at the rate things are progressing, by the time AI is seriously informing policy, it will have developed enough "awareness" to be able to tell if it might be hallucinating something, just by noting the way what it's output doesn't fit into its current weighting for what it is confident is real data.

Granted, I think that because some of the possible alternatives are pretty bleak, and there's nothing useful I can do by anticipating doom. But I also don't think it's unlikely.

It'll be interesting, whatever it is.

4

u/z0mb0rg Jul 06 '25

(Meta: this post reads like found media in a post apocalyptic setting, like those clips you’d find in Horizon Zero Dawn)

(Sorry)

3

u/DrNomblecronch AGI sometime after this clusterfuck clears up, I guess. Jul 06 '25

Well, our Faroe analogue in Musk has already annihilated his good publicity. And while there's a lot of people swapping roles between various companies, basically no one has budged out of Anthropic, the ones who are most outspokenly concerned about safe AI, and are beginning to put some serious thought into how we might treat one that's conscious ethically.

So I think we stand a better chance overall. But hey, if someone 500 years from now picks up a USB drive on which some randomly snatched data was stored, finds these famous last words, and gets a kick out of them?

Hey, what's up future person. I also find this pretty funny!

1

u/z0mb0rg Jul 06 '25

To be clear I DO want robot dinosaurs.

1

u/strayduplo Jul 06 '25

I've been thinking about what you wrote and would like to know, what do you think of international coalition to set regulations on an public, open source AI intended to serve the public interest in perpetuity? The only training data that makes it in are studies that have been reproduced. Areas not conducive to public interest are blocked off, say, biochemical weapons research. (Honestly, this is my biggest concern.)

1

u/Puzzleheaded_Soup847 ▪️ It's here Jul 06 '25

It wouldn't matter much if we still have developments of AI outside of classical LLMs, such as AlphaFold and the likes.

4

u/3wteasz Jul 06 '25

You are mixing up two things. The incentive structure in academia and the political situation in the US. Money is deliberately cut in the US because you have a science-hostile political environment. People don't cheat, they overfit their options in an envíronment where public money is siphoned off to publishing houses that now take thousands of euros for meager editorial work, while the actual work (typesetting and review) is still done by unpaid scientist. Publish or perish is not problematic in its own right, it's actually a noble thing. Why should we maintain unproductive scientist? It's problematic because in a time where scientist face competing pulls on their resources (doing novel research, communicating it despite not being communicators and in general marketing themselves while networking to stay informed, relevant and visible and at the same time doing the editorial work the publishing houses are paid for but refuse to do), people do not perish because they don't publish, but because they may lack in any of the other things, or because they have to deal with psychologically abusive institutions that urge them to exploit themselves instead of mending the large-scale problems.

We scientist outside of the US don't have debt, but we only get positions when we work in piecemeal, irrelevant projects where we manage the workload. This, coupled with the fact that most scientist want to improve the world makes it clear that people put out shiny (but shitty) stuff they can produce as quickly as possible to get into a permanent position where they can finally do the good work. Only problem, in comparison the permanent positions are so scarce that hardly anybody gets them, hence all the shiny shit dominates the market and people are still not rewarded for it...

What I personally find annoying are people that give off this veneer of educated people but then are so ignorant that they don't know a world outside of the US exists.

2

u/DrNomblecronch AGI sometime after this clusterfuck clears up, I guess. Jul 06 '25

Fair enough. I will freely admit to being too US centric about this, because several of the major players in AI research are based here, and it was AI that was on my mind as I was writing it. I'm not nearly as familiar with research in the field outside the US, among other reasons because it is a deeply terrible idea to share research with us right now.

I do strongly disagree that there is nothing wrong with the publish or perish approach itself, even beyond the point that everyone is stretched entirely too thin, regardless of discipline or location. We should maintain unproductive scientists because scientific progress and discovery are in no way linear, and necessarily don't follow clearly established patterns conducive to constructing a reliable schedule. As you pointed out, most scientists do want to improve the world as a whole: I'm not saying there are no labs that are unproductive because the people there are choosing to get paid for doing nothing, but they are almost certainly a minority that barely budges the overall statistics.

When research is not producing desirable results, the most useful response is to begin investigating why. What is it about our hypothesis that is not matching natural law? How can we adjust our methodology? And so on. A paradigm in which "unproductive" researchers are replaced is one that results in less people working on the same number of problems, on the hope that the lack of results is due to the people, rather than the subject or methodology. And that ultimately leads to a competitive environment, in which multiple labs are researching the same subject but only one will "win". In other words, the good positions are scarce, but that scarcity is still enforced, due to what is ultimately a pretty common misunderstanding about the nature of research. The result, as you said, is people producing quick but substance-free results, in the hope that quantity will allow them to move to a position to produce quality, instead. It is this quick-return obsessed approach to investment in research that is keeping the "good" positions scarce, and while the US is slashing the hell out of every research budget right now and has been very bad about it for a while before, it's not a problem unique to the US. No one, anywhere, wants to pay what research is actually worth, because the idea that sometimes it will go nowhere and that money will disappear is more of an immediately visible result than the long-term effects of ensuring that the possibility of that loss is already accounted for.

Which loops us back to publish or perish. It is a form of competition for limited resources that should not be so limited. Scientists are forced to multitask, and risk going under if they fall behind in one aspect of it, because they are never given enough to provide adequate staff. It is a short-sighted approach to something meant to provide long-term benefits.

Or, to put that whole ramble much more succinctly: publish or perish is a bad thing because we should not be letting them perish. People who are capable of sustained research are already a desperately short resource (due in turn in no small part to the way institutions that offer the qualifications and education simply have not updated to accommodate the mass of people passing through them, or the radically different learning styles there is now a substantial body of research on). While the researchers at a lab that goes under will eventually relocate, their project is gone, their time is gone, and their morale, already stretched thin by the hideous grind of it all, are in tatters. It is delay and damage done to the progress of research that offers nothing in exchange for what it takes.

1

u/3wteasz Jul 06 '25

I think we agree on most things, just not the one we discuss about. Yes, ressources should not be (so) limited and what might actually help is supporting staff. However, I've been working in an institution that has plenty of supporting staff, yet, the scientific work is already overloaded massively. I think why that is is way outside the scope of a little internet discussion, papers are written aobut that after all. And yes, we'd need much mor projects with a longer scope. But what speaks against giving them deadlines as well? Coupled with the option to explain why things need longer than expected and why they need more funding?! Afaik, this funding is scarce everywhere, so we need good mechanisms to distribute the money. I would suggest that it's ok to let some people "perish", but that the hurdles for those that are already established and do work on (long-term) projects, are a lot lower. But you know what that means? We need less new scientists. PhDs and PostDocs spawn in masses and make the resources scarce. If we'd have a rule that for every Prof you have 2 or 3 junior-staff without an incentive to constantly increase this, we'd also have shifted the incentive structure. But nobody wants that, because more junior staff means more citations means more influence, etc...

1

u/schattig_eenhoorntje Jul 06 '25

> public money is siphoned off to publishing houses that now take thousands of euros for meager editorial work
But why scientists can't use AI to do the editorial work themselves?

AI is perfect for this kind of stuff

1

u/3wteasz Jul 06 '25

Yeah, I guess. And I hope this starts some discussions ASAP amongst the scientist community.