19 Comments
Daniel Solow

The purpose of academia is not to produce papers, it is to produce human beings capable of writing, understanding, and putting papers into practice. If generative AI begins to be used widely within academia, as seems likely, the institutions that use it will pretty quickly stop producing human beings capable of understanding academic papers, let alone writing them or putting the ideas contained in them into practice.

And in the same vein, I think a culture that uses generative AI in an unrestricted manner is going to be quickly out-competed by a culture that uses it sparingly. I hate to reduce things to Social Darwinism, but sometimes it's appropriate.

Craig Yirush

Such nonsense. Get back to me when an LLM can go to the archives, find rare manuscripts, read them, make some sense of them in light of previous work, and craft an original argument. Could say the same thing for fieldwork in the social sciences. All the LLMs know is what they’ve been trained on, and all of that knowledge is the product of hard won research and thinking by humans.

The Radical Individualist

Let's all remember that there are huge quantities of writing that have never been digitally transcribed. For that matter, there are recordings, paintings and photographs that AI is completely unaware of.

How about conversations? Think of the exchanges of ideas and perceptions, the reinforcing of biases, the private expressions of thoughts in small groups that they would never express online. Much of this speaks more to who we are, what we think, and what we value than the entire Universe of Google.

Woolery

>That’s the right instinct, but it’s looking in the wrong place. The real power that Google, Meta, and OpenAI exercise isn’t over your choices—it’s over the conditions under which you think. They don’t tell you what to believe; they shape what you encounter, what feels plausible, what questions seem worth asking, and increasingly, through generative AI, they produce the very material out of which your beliefs are formed.

>I want to suggest that the most consequential form of private power in contemporary societies operates differently. It does not principally work through the direct exercise of authority over employees or the coercive imposition of terms on consumers, though it does these things too. It works, rather, on the conditions under which citizens form their beliefs, interpretations, and political judgments — on the epistemic environment that democratic self-governance presupposes but that democratic theory has largely taken for granted.

I agree with much of your argument, and the paper you present is in many ways very impressive. But it’s also mind-boggling that, for all the sophistication of some aspects of AI writing, it seems utterly unable to stop reaching for the “it’s not X, it’s Y” argumentative framework. That speaks to AI’s stubborn (and glaring) deficiencies.

A Series of Polite Chuckles

Bracketing other concerns, suppose that in 5 years most web content (words, images, video, code) has been generated by A.I. So what? Isn’t the most likely outcome that it will all be so boring and pointless that everyone will just stop paying attention? It is very impressive technology; Claude 5.3 is very good at coding from natural-language prompts. It is, in some sense, incredible that an LLM can spit out a serviceable academic paper in seconds, but in another sense it’s boring and trivial. The more seriously I interact with this technology (and I have been trying to take it as seriously as possible, as seriously as the innumerable articles like this one would have me take it), the more I suspect the boring and trivial qualities will prove more decisive than the incredible ones.

Leo Marcel Schuman

"Humanities" artifacts generated by LLMs are, by definition, inhuman.

Yes, these artifacts will obviously occupy, and likely overwhelm, a large and derivative media space. That space will be inhuman.

Some humans will choose to occupy that inhuman space as well. Implants are coming, and cyborgs will soon pass from stories into reality.

Last night, a friend who works as an RN spoke of how patients squirm and resist being hooked into medical machines, even though they need them. So, hospitals sedate them. Eventually, after long enough under sedation, the patients stop resisting the machines. Often they die not long after this point, for having lost the will to live a life of their own.

Alastair James

The physics paper you reference and link to is interesting. On a point of pedantry, it’s a paper on particle physics, not astrophysics. More importantly, it is in an area where very complex calculations sometimes collapse, after much hard work, into a structurally simple but correct model. The physicists applied AI to a subset of the theory that had resisted such clarification, and the AI was able to identify the pattern and propose what mathematicians call a closed form. That closed form provides humans with understanding: it is the clear pattern underlying the previously perceived chaos and complexity. It was then rigorously checked by human mathematicians and found to be correct. The physicists are excited at the potential to tackle problems so far intractable and create understanding we didn’t have before.

Perhaps humanities professors need to stop both fearing for their jobs and denying that AI can do anything useful, and instead start working out how to use AI to accelerate our progress in tackling our enormous social and political challenges. And perhaps, instead of finding ways to stop your students using AI, or detecting when they have, you need to work out how to teach them to use AI to make more progress in less time.

Oliver Bateman Does the Work

As head of research content for a consultancy, I now work as “keeper of the AI,” maintainer of the Claude Code repositories, sentence-by-sentence de-AIfier (simply because it’s best to keep the tells to a minimum, if only to suggest human review, unless you’re an org proudly boasting of “touchless AI”). For my money, there are still too many tells here, and I’d subject this to many more rounds of review in markdown: “Not A but B,” “if it’s B, then A,” &c., “It is remarkable,” the strange use of “paradox,” etc. It’s serviceable, though this is still a LEGO construction-method sort of job. Even so, serviceable is more than enough (when money is involved) to justify the deskilling of most paper jobs, much as “good enough” 3D-printed homes will be able to challenge the trades in a few years. Whether anyone likes it, or whether anything is done to halt the escalation (nothing, I suspect, or as little as possible), is another matter, and not one I’ll have much say in.

The Radical Individualist

"Disciplines from literature to philosophy are needed to help us answer questions about how we can find a place in the world when we are much less needed than before,"

You can always become a plumber. You'd be surprised how popular those guys are. And the pay is better.

Brendan

How many of the citations are hallucinated?

LV

And how many of them do not say what the paper says they do?

Mani

I will admit I did not want to read all of it, for fear it might be better than I want it to be. But judging from the abstract and what I read a bit afterwards, the argument does not seem that in-depth or complex; it only appears so because of the name-dropping and the neologism of ‘epistemic domination’. If I were to summarize the argument in one sentence: the AI is arguing that we usually think of corporations as a smaller form of the state, but that AI companies differ in also having power over how we come to know things. This is really not that in-depth at all. Both of those points have been discussed before, the two thoughts do not have much of a relationship with each other, and the paper really just threw them together and name-dropped to make it all appear more profound. From what I have gathered, it seems all it has cracked is how to imitate the ivory-tower academese that has become standard in recent years, and how to throw together things human beings have already thought.

The question I have is whether, if it only had other AI papers to work with, it would eventually implode into repeating the same things again and again, as would presumably become more common if academics continued to use these technologies. I have also heard from many people in the industry, such as Yann LeCun, that LLMs may well have hit a dead end: even with orders of magnitude more investment, they could only get marginally better at most. Even Sam Altman has openly admitted that there may be nothing that can be done to fully get rid of the hallucinations, that they are an issue built into the design of the invention.

Also, because human beings are themselves the actors in human societies, there will always be a need for humans to think about their social organization themselves.

Maybe I am saying this as wishful thinking, though, because I very much do not like AI, and having converted to skepticism after my initial doomerism, I am sticking with that position for now.

Robin Randall

I admit to not reading all of it, I’m sorry. The bottom line is that humans need to make a choice: to choose humanity over profit and ease of work. The rich and powerful will not allow it. We can use AI for medical research to help humans. And AI itself has said that humanity would need to provide a basic income as AI begins to do more. Capitalism is not going to fully get us there. And then there are the unplanned hallucinations that come with all AI.

Aman Sowell

The discussion of Pettit is a distinction without a difference. Pettit has already accounted for what Claude has tokened upstream.

That doesn't make the piece less publishable. It would go through.

Brandon Hendrickson

Left unexplored, I find, is AI’s potential to revolutionize humanities education now. I’ve lately been laying out how this can be done (https://www.losttools.org/p/yes-you-can-just-ask-chatgpt), but the tl;dr is that anyone can now make curriculum that’s as emotionally compelling as it is intellectually rich, for students as young as elementary school. The humanities have a brilliant future if they embrace AI and start sharing their insights with non-specialists.

Alistair Alexander

I am highly critical of AI for a range of reasons. I don’t think the core debate is between embracing and denying AI’s capacities; it is more about the impact of those capacities being widely deployed in different contexts.

Specifically with academic papers, the better they are, the bigger the problem, as it is becoming increasingly hard to establish which papers are based on solely human work; soon it will be impossible. LLM knowledge is entirely derivative of human knowledge, and we know AI-generated knowledge causes model collapse, so this will present a profound crisis for human knowledge over time. But that’s epistemological domination for you...

Also, as a test: how about getting a selection of five human and AI papers and sending them to a journal for assessment, but letting them know beforehand? Surely any responsible journal should be willing to partake in a test like that?

Oh, and personally, if this were a human, I’d think the paper is OK: a bit derivative, a few LLM writing clichés. But yes, that’s impressive for an LLM.

LV

This actually sounds like a paper I would like to read myself, but the “uncanny valley” of knowing no human was involved gives me an “ick.”

I suspect there is more AI content on the web than we think. Just as when we heard of the first COVID case, it had already infected thousands.

Monte McMurchy

Provocative considerations are suggested here, whereby AI is able to create critical academic writing acceptable for publication by learned prestige journals.

My fundamental concern is directed at the person allowing AI to create for him, as I believe that person will be denying himself the earned rigour of flexing the mind and developing the critical capacity of discernment.

Orwell’s “1984” for me adumbrates the intrinsic sadness of AI becoming pervasive within the intellectual commons.

Mankind will suffer in the long term unless AI is managed as a resource by the Academy, ensuring the student learns to crawl intellectually before marathoning with AI as an assistive resource.

FYI, this thinking was created with no external AI involvement.