117 Comments

Daniel Solow:

The purpose of academia is not to produce papers, it is to produce human beings capable of writing, understanding, and putting papers into practice. If generative AI begins to be used widely within academia, as seems likely, the institutions that use it will pretty quickly stop producing human beings capable of understanding academic papers, let alone writing them or putting the ideas contained in them into practice.

And in the same vein, I think a culture that uses generative AI in an unrestricted manner is going to be quickly out-competed by a culture that uses it sparingly. I hate to reduce things to Social Darwinism, but sometimes it's appropriate.

valentin.limoges:

“The purpose of academia is not to produce papers”, well it should not be, but 4 years of PhD have taught me that this lofty (?) ideal has nothing to do whatsoever with the lived practice of “publish or perish”. Paradoxically, if AI can shit out mediocre papers, we might actually see better human papers, because mediocrity just isn’t worth publishing anymore.

Edogawa Ranpo:

The Kafkaesque purpose of graduate school in my experience was to write grant proposals!

Slush:

Ah yes, the classic. To improve something we'll shit it up so much that it might as well be dead.

We have finally fallen so low that we've reached primal-era tactics for improving scholarship. I'm willing to bet cavemen followed similar thought patterns when dealing with the sick. Someone's sick? Cut him apart till he bleeds to death. Maybe he'll get better after that.

Dr. Harding:

I think this is precisely right. AI is a threat to how much of academia works today, but we should also take it as pointing out something problematic. Academia today is much too focused on producing large quantities of papers that don't add substantially to discussions. We need to find out what humans are still better at. This appears to include understanding and judgement, including judging the quality and relevance of arguments and ideas, which is now becoming a more important part of creativity than merely coming up with new combinations.

the modern betsy:

agreed. people will still need to facilitate the process, face to face. i also think we will begin another golden age of the live arts, because those who can - act, sing, recite poetry, write on paper, sew, play drums, paint & dance - will.

Ela:

Exactly. There are only so many papers a person can read in a week or a month. AI can produce volumes that are impossible for anyone to absorb. There still has to be a person who decides which topic to concentrate on and who assumes responsibility for the idea. The goal of a paper isn’t just to demonstrate competence in a field — it’s to inform and influence readers, spark debate, and, ultimately, help change the world.

Craig Yirush:

Such nonsense. Get back to me when an LLM can go to the archives, find rare manuscripts, read them, make some sense of them in light of previous work, and craft an original argument. Could say the same thing for fieldwork in the social sciences. All the LLMs know is what they’ve been trained on, and all of that knowledge is the product of hard won research and thinking by humans.

The Radical Individualist:

Let's all remember that there are huge quantities of writing that have never been digitally transcribed. For that matter, there are recordings, paintings and photographs that AI is completely unaware of.

How about conversations? Think of the exchanges of ideas and perceptions, the reinforcing of biases, the private expressions of thoughts in small groups that they would never express online. Much of this speaks more to who we are, what we think, and what we value than the entire Universe of Google.

Stephen Dause:

> go to the archives, find rare manuscripts

Yes, LLMs are not robots.

> read them, make some sense of them in light of previous work

Once digitized, they can definitely do this.

> craft an original argument

Is the argument re: corporations' epistemic domination not original? What counts as "original"?

> all of that knowledge is the product of hard won research and thinking by humans.

Certainly true, and this is something that I think gets lost too often in the debates surrounding AI. It's also something which I don't think current law is equipped to address.

Craig Yirush:

Oh, so they are not robots and thus have zero ability to do research of any kind, but the OP (to whom I am responding, not you) can say hyperbolic things like "AI can write research papers"? Why are you defending such obvious bs? PS: are you claiming these stochastic parrots can understand all the nuances of a complex academic debate and make an original argument (i.e., say something new)?

Janet Salmons PhD:

Exactly. Get back to me when they can go into the archives to read original sources that haven't been digitized. HUMANities!

Craig Yirush:

Yes, before I go to the archives I always make sure I have the documents in my vector database.

Sam:

It's not that far off; that's basically what RAG (retrieval-augmented generation) does now. All you have to do is raise the memory allocated and make sure it has the documents in a vector database.
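
For the curious, here is a minimal sketch of the retrieval step that RAG performs, in plain Python with no external libraries. The word-count "embedding", the in-memory list standing in for a vector database, and the sample documents are all toy assumptions; a real pipeline would use a trained embedding model and a dedicated vector store, and would paste the retrieved text into an LLM's context window.

```python
# Minimal retrieval-augmented generation (RAG) sketch, standard library only.
# Toy stand-ins throughout: a bag-of-words "embedding" and a plain list as
# the vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# "Ingest": embed each (digitized!) document into the store.
documents = [
    "Mill's letters on liberty and the limits of social coercion",
    "Parish records of grain prices, 1840-1855",
    "Factory inspectorate reports on child labour",
]
store = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored documents most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved passages get pasted into the model's context window, which
# is why "raising the memory allocated" (a bigger context) matters.
context = "\n".join(retrieve("Mill on liberty"))
print(f"Using only these sources:\n{context}\n\nSummarize Mill's view of liberty.")
```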

THPacis:

The premise here is all wrong. Claude did not write this paper. *You* wrote it, with Claude's immense help. It increased your productivity by several orders of magnitude, and that is a revolution, to be sure! But, and this is the crucial point, you are selling your expertise short! What you did as you describe your process - the back and forth, from the choice of the "better" of the five initial topic suggestions (how did you know which was more promising?) to the prompting to choose Mill's letters and not chapter 2 of On Liberty, etc. - all of that is your expertise at work (indeed, even the knowledge of which is the leading journal in the field on which to train Claude in the first place...). In short, Claude allows *you* to write a publishable article in your field in a couple of hours, but you will not be able to do so in a field foreign to you. Expertise is still needed. What we have here is a huge productivity gain for those who are already experts, and a crisis in training future experts. That is not the same thing as saying we don't need future experts, or that research isn't important, or that it needs to be de-emphasized, or that it is now something anyone can just do.

PDM:

I want to second this. Weird that Mounk just kind of glossed over the fact that his knowledge and expertise guided the essay. Take someone who knows nothing about the topic and has little experience with research, and the outcome would be very different.

Dominika:

Exactly my thoughts from the beginning. You can use AI as a kind of advanced meta-Google that helps you go through immense sources of data - but still, only an analyst who has read hundreds of papers and lived some experiences (for me, political science is not just about the books and articles; it's about visiting places, talking to people, etc.) can verify the relevance. As long as you have been proficient in your field for years, you can train AI like your own student to assist in some ways, while keeping your originality and opinions close to yourself.

Woolery:

>That’s the right instinct, but it’s looking in the wrong place. The real power that Google, Meta, and OpenAI exercise isn’t over your choices—it’s over the conditions under which you think. They don’t tell you what to believe; they shape what you encounter, what feels plausible, what questions seem worth asking, and increasingly, through generative AI, they produce the very material out of which your beliefs are formed.

>I want to suggest that the most consequential form of private power in contemporary societies operates differently. It does not principally work through the direct exercise of authority over employees or the coercive imposition of terms on consumers, though it does these things too. It works, rather, on the conditions under which citizens form their beliefs, interpretations, and political judgments — on the epistemic environment that democratic self-governance presupposes but that democratic theory has largely taken for granted.

I agree with much of your argument, and the paper you present is in many ways very impressive, but despite the sophistication of some aspects of AI writing, its utter inability to avoid constantly reaching for the "it's not X, it's Y" argumentative framework also speaks to AI's stubborn (and glaring) deficiencies.

Alastair James:

The physics paper you reference and link to is interesting. On a point of pedantry, it's a paper on particle physics, not astrophysics. More importantly, it is in an area where very complex calculations sometimes collapse, after much hard work, into a structurally simple but correct model. The physicists applied AI to a subset of the theory that had resisted such clarification, and the AI was able to identify the pattern and propose what mathematicians call a closed form. That closed form provides humans with understanding: it is the clear pattern underlying the previously perceived chaos and complexity. It was then rigorously checked by human mathematicians and found to be correct. The physicists are excited at the potential to tackle problems so far intractable and to create understanding we didn't have before.

Perhaps humanities professors need to stop both fearing for their jobs and denying that AI can do anything useful, and instead start working out how to use AI to accelerate our progress in tackling our enormous social and political challenges. And perhaps, instead of finding ways to stop your students using AI, or ways to detect when they have, you need to work out how to teach them to use AI to make more progress in less time.

Daniel Solow:

When you talk about "acceleration" and "more progress in less time," it sounds like tech-zombie buzzwords. I don't think that "bigger, faster, quicker" mindset is helpful in academia, especially not in the humanities.

Alastair James:

I asked Google Gemini for examples of where humanities scholars are already using AI to help their work. Here is a summary of the cases we discussed:

1. Scaling Historical Empathy: Living with Machines (UK)

A collaboration between The Alan Turing Institute and the British Library uses AI to analyze millions of pages of 19th-century newspapers and census records. It tracks the "linguistic fingerprints" of ordinary workers during the first Industrial Revolution. By automating the "grunt work" of reading archives that would take lifetimes to finish, historians can now synthesize how society actually felt about mechanization as it happened, providing a blueprint for how we might navigate the current AI revolution.

2. Mapping Intellectual Networks: The Republic of Letters (Stanford)

Scholars used AI to process and visualize over 55,000 letters exchanged between Enlightenment thinkers like Voltaire, Locke, and Franklin. The "acceleration" of data processing revealed that the Enlightenment was far more decentralized and global than previously thought. The AI didn't replace the scholar; it provided a high-resolution map that showed them exactly which archives were most significant for "slow," deep reading.

3. Solving the "Deliberative Bottleneck": The Habermas Machine & Polis

In political philosophy, the ideal of "deliberative democracy" (proposed by thinkers like Jürgen Habermas) has always struggled with the problem of scale: how do you get millions of people to talk to each other constructively?

Polis: An AI tool used in Taiwan and the UK to map the opinions of thousands of citizens simultaneously. It identifies "consensus clusters" - areas where polarized groups actually agree. (A toy sketch of this idea follows below.)

The Habermas Machine: A recent AI experiment that synthesizes diverse perspectives into a "common ground" statement, successfully reducing polarization in citizen assemblies.

4. Stress-Testing Theories of Justice

Philosophers are now using "Generative Agents" (AI personas grounded in real-world demographic data) to simulate how different social contracts or "thought experiments" might play out in a large, complex population. This moves political philosophy from purely theoretical intuition to evidence-based modeling, allowing us to test if a policy is truly "just" before it is implemented.

Our entire conversation is here: https://g.co/gemini/share/d50b65f889d0
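
Since the Polis example above turns on an algorithmic idea (grouping voters into opinion clusters, then surfacing statements every cluster supports), here is a toy sketch of that idea in plain Python. Everything in it - the vote data, the naive 2-means grouping, the 0.5 support threshold - is an illustrative assumption, not Polis's actual method.

```python
# Toy illustration of "consensus clusters" (the idea behind Polis).
# Participants vote agree (+1), disagree (-1), or pass (0) on statements;
# we split them into two opinion groups, then look for statements both
# groups endorse.

# votes[participant] = that participant's votes on four statements
votes = [
    [+1, +1, -1, +1],  # "camp A" leaning
    [+1, +1, -1, +1],
    [+1, +1,  0, +1],
    [-1, -1, +1, +1],  # "camp B" leaning
    [-1, -1, +1, +1],
    [-1,  0, +1, +1],
]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(group):
    return [sum(col) / len(group) for col in zip(*group)]

# Naive 2-means: seed with the two most dissimilar participants, then refine.
i, j = max(((i, j) for i in range(len(votes)) for j in range(i + 1, len(votes))),
           key=lambda p: dist(votes[p[0]], votes[p[1]]))
centers = [votes[i], votes[j]]
for _ in range(10):  # a few passes suffice for toy data
    clusters = [[], []]
    for v in votes:
        clusters[0 if dist(v, centers[0]) <= dist(v, centers[1]) else 1].append(v)
    centers = [centroid(c) if c else centers[k] for k, c in enumerate(clusters)]

# A consensus statement has high average support in BOTH clusters,
# even though the clusters disagree sharply elsewhere.
for s in range(len(votes[0])):
    support = [sum(v[s] for v in c) / len(c) for c in clusters if c]
    if len(support) == 2 and all(x > 0.5 for x in support):
        print(f"statement {s}: cross-cluster consensus, support={support}")
```

On this toy data, statement 3 surfaces as common ground even though the two camps oppose each other on everything else, which is the point of the technique.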

Better Days Are A Toenail Away:

Why the hell would I want to read what Claude or Chad Gupta (ChatGPT) thinks of something?

I don't really care *if* an LLM can produce an argument. I'm not interested in reading a Humanities paper not written by a human.

Doctor Mist:

That strikes me as a weird reaction. Maybe it’s because my background is STEM rather than Humanities, but I would say that the validity of an argument is more important than its source. If a paper presents a logical argument that gives you a new insight about the world, what do you care where it came from?

I’ll grant that it is at least disconcerting to reflect that AI might increase the productivity of academics so profoundly that humans don’t have time to benefit from all the new insights thus produced. And perhaps the insight in this particular paper was not so earth-shaking; I’m not qualified to judge. But demanding to know the source of those insights before letting yourself profit from them seems like burying your head in the sand.

Better Days Are A Toenail Away:

In the Humanities there is no one right answer. There is always a more right answer. That's why you can never get 100% in a course or on a paper.

Since LLMs' generative models are trained on existing writing, they're not going to come up with anything truly "new" anyway.

I'm not going to spend the short time I have on this earth reading A.I.-generated arguments. Not when there's so much other stuff I want to read and nowhere near enough time to read it all. If that's "burying my head in the sand," then I'm a proud ostrich.

Doctor Mist:

Since humanities students are trained on existing writing, are they not going to come up with anything truly "new" either? I'm open to the possibility that LLMs will plateau at the "humanities student" level rather than at the John Stuart Mill level, though I don't see their reliance on existing texts as a very good argument for that proposition, since that's how everybody is trained. We don't have genes for writing humanities papers.

I'll grant that your heuristic is probably pretty sound at the moment. There's certainly more good stuff out there that I want to read than I will have time for even if writers stopped writing today. Most of what AI writes is pretty sloppy. Now. But they keep surprising me, doing things I considered impossible two years ago, or even today given my understanding of the ideas behind LLMs. (The particle-physics example Yascha pointed to blew me away.)

If your colleagues pointed you to a paper that they considered a profoundly "more right answer", would you refuse to read it because it was by an AI? Why?

Better Days Are A Toenail Away:

Look, I'm sure that any argument I come up with can be swatted down by any first-year philosophy major. I'm not that good at articulating my discomfort with A.I.

I'm just not that psyched about a technology that has been developed to replace humans, especially in artistic fields. Advocates of A.I. keep saying it wasn't built for that reason, but that's not really the point. It is going to replace humans in academic and creative fields, whether or not that was the intended effect.

If film studios and publishing houses can make more profit selling movies and books by hiring non-human writers, who never get sick and won't unionize and don't need to get paid and can produce more work faster, that's what they’re going to do.

I'm a full-time mechanic, but I'm also a writer and a musician (in both cases completely obscure). Most places I go, both online and in meatspace, people are waving A.I. at me like a severed head. It's cultic and creepy.

If I said "so far, what A.I. has delivered has not justified the hype," I'm sure you could point to some amazing things A.I. has done.

To me, it's made the internet a much shittier place to hang out and it's made customer service much more frustrating. That's about the extent of my engagement with it so far.

I will concede your point. If I were writing a college paper and needed to bolster my argument by citing an A.I.-written paper, I would do it. I just wouldn't be psyched about it.

A small part of me is astonished that I'm arguing for the Humanities remaining a discipline where human beings talk to each other and exchange culture. The other part of me has been overcome with nihilistic despair, which will hopefully prepare me for our A.I.-dominated future where we all lie in bed doomscrolling while collecting UBI and having food delivered.

Doctor Mist:

I ain’t saying you’re wrong to be worried. I am too. I think the potential for upending society from this is even larger than for either the agricultural revolution or the industrial revolution. But the upside potential is pretty great, too. And in any event I don’t think there’s any stopping it; better to stay informed so as to shift the trajectory slightly if and when it’s needed and possible.

The industrial revolution is largely what made it possible to end slavery, and our descendants may find it nearly as incomprehensible that all of us worked throughout most of the daylight hours, on more days than not, often in jobs so emotionally unrewarding that they might as well have been done by machines, just to put food on the table.

If things advance so far that AIs can do literally anything we can do better than we can (and I *think* that’ll take a breakthrough or two beyond LLMs, though I could sure be wrong), we’ll have to find sources of meaning beyond putting food on the table. I think we can. I hope we can.

(I recommend Bostrom’s quirky but entertaining book “Deep Utopia”. It’s not 100% convincing but it’s food for thought.)

Better Days Are A Toenail Away:

I agree. There's definitely no stopping A.I. Which is another aspect of its encroachment that annoys me: A.I. advocates adopt this aggrieved tone/victimhood posture whenever someone criticizes it. Which is disingenuous. They're not underdogs. They're going to take over society. Positioning themselves as misunderstood humanitarians doesn't sit well with me. They have made it clear they fully intend to replace human jobs.

The problem with trying to stay informed is the sheer volume of conflicting content. I'll read one article whose author is adamant that there is an A.I. bubble and that the amounts of $ that Microsoft, Google, and Amazon (not Apple) are sinking into A.I. are going to destroy them. Then I'll see another article that says the exact opposite. The predictions of utopia and dystopia are likely both wrong. I think the future will land somewhere in the mushy middle between the two extremes. There is a great bifurcation coming, after which the world will be divided into haves and have-nots.

I guess I'm just particularly negative about A.I. today because yesterday morning I read this article: https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/

Which profiles a few Silicon Valley characters and reads like satire.

And last night, a book a friend recommended to me, The Metamorphosis of Prime Intellect, arrived on my doorstep and after reading the first few pages, I couldn't put it down. I read the whole thing in one sitting.

The thesis of the novel is this: an A.I. that can provide us with everything we need by assuming control of physical reality will imprison us in hell. Our lives will have no meaning without struggle. People will be miserable.

Either way, I appreciate you engaging with me in good faith here.

paul teare:

Trouble is you won't know.

Better Days Are A Toenail Away:

I guess there's only one foolproof way to make sure you don't read anything written by A.I. anymore.

Only read things written and published before its invention.

paul teare:

Yes, true. Imagine how this affects ethics applications for research.

Jim Shankland:

What happens when you can't tell the difference?

Better Days Are A Toenail Away:

If a day comes when I can't trust any new writer or music artist or movie because, if I like it, someone will bray "gotcha! So you DO like A.I.-generated art!", I guess that would be annoying.

But there are already more books I want to read than I have time to read. I guess I'll just read stuff from before the enshittification of culture brought about by A.I.

Treekllr:

One thing about the highly educated left: they're pretty dense. There was no room for denial a while ago, but here again is an "AI is coming for us" article. It's why this cohort fails miserably at actually getting shit done. For instance, *still* bitching about Trump? The time for that has long since passed, yet that's about all the left can actually get done. Perhaps you should ask Claude how in the hell exactly the Democrats lost the election? And perhaps the answers will be enlightening. But I digress.

I'd think any relatively smart person who has actually spent some time using AI came to the conclusions in this article some time ago. I sure did, and I'm not even that smart. It would behoove anyone who is concerned about AI to learn about it, by using it. To not do so is to bury one's head in the sand, which is *never* an intelligent response. But I get it, it's fear. Fear that one's "creative" job will soon be done by bots, and with no other marketable skills... well, we don't need a mathematician or a bot to do that math.

So yeah, I guess keep ringing the alarm until all the smart people hear it, but just know you're behind, and more so the more you keep defining the problem (instead of working on viable solutions, and actually *doing* them, of course).

"I decided not to do so"

I guess you know who butters your bread, huh. But it touches on the point. We need courage, not caution. And that right there is why Trump is beating the pants off the left. He goes at it like he has nothing to lose, and so wins. The left is too busy clutching what they have (much like the left brain, an interesting correlation; someone should look into that, or ask Claude to), afraid to lose anything and thus unable to move, stuck in their monkey trap. Convinced that their "intelligence" makes them superior, "elite".

And just to be clear, I am not MAGA or a fan of Trump, or even right-leaning any more than I am left-leaning. I just calls em like I sees em.

And then I read the comments and see what you're up against, lol. Y'all would argue the color of grass if doing so promised to get you out of mowing it.

A Series of Polite Chuckles:

Bracketing other concerns, suppose that in 5 years most web content (words, images, video, code) has been generated by A.I. So what? Isn't the most likely outcome that it will all be so boring and pointless everyone will just stop paying attention? It is very impressive technology; Claude 5.3 is very good at coding from natural-language prompts. It is, in some sense, incredible that an LLM can spit out a serviceable academic paper in seconds, but... in some sense it's boring and trivial. The more seriously I interact with this technology (and I have been trying to take it as seriously as possible; as seriously as the innumerable articles like this one would have me take it), the more I suspect the boring and trivial qualities will prove more determinative than the incredible ones.

Mani:
Feb 16

I will admit I did not want to read all of it, for fear it might be better than I wanted it to be, but from the abstract and what I read a bit afterwards, the argument does not seem that in-depth or complex; it only appeared so because of the name-dropping and the neologism of 'epistemic domination'. If I were to summarize the argument in one sentence: when we think of corporations, we usually think of them as a smaller form of the state, but AI companies differ in that they also have power in determining how we think of knowledge. This is really not that in-depth at all, both of those things have been discussed before, the two thoughts do not have too much of a relationship with each other and it really just threw the two together and name-dropped to make it appear more profound. From what I have gathered, it seems like all it has cracked is how to imitate the ivory-tower academese that has become standard in recent years, and how to throw together things human beings have already thought.

The question I have is whether, if it only had other AI papers to work with, it would eventually implode into repeating the same things again and again, as presumably would become more common if academics continued to use these technologies. I have also heard from many people in the industry, such as Yann LeCun, that LLMs could very well have hit a dead end - that even with orders of magnitude more invested in them, they could only get marginally better at most. Even Sam Altman has openly admitted that there may be nothing that can be done to fully get rid of the hallucinations, that it is an in-built issue with the design of the invention.

Also, because human beings are themselves the actors in human societies, there will always be a need for humans to think about their own social organization.

Maybe I am saying this as wishful thinking, however, because I very much do not like AI, and having converted to skepticism after my initial doomerism, I am sticking with this position for now.

Jonathan D. Simon:

"This is really not that in-depth at all, both of those things have been discussed before, the two thoughts do not have too much of a relationship with each other and it really just threw the two together and name-dropped to make it appear more profound."

Don't look now but doesn't that come within a semicolon of describing about 90 percent of human-generated theses?

Mani:

I reluctantly have to agree; this tells you more about the state of academia today than about AI actually surpassing humans under more ideal conditions. This would not fly fifty years ago, when academia, although still at times abstruse, was actually about arriving at original ways of thinking about society and political organization rather than about being a citation machine. Now it is mostly needless obscurantism meant to confirm to oneself that very simple sentiments and ideologies are true - especially if it is about self-victimization - and that the author is in a privileged position to express them. No wonder there is an increasing suspicion of elites worldwide.

I still very much enjoy very abstract theoretical political texts, but I am hesitant to say so, because I do not want to be associated with this kind of stuff - and most of the people who write these things are insufferable. Perhaps I am not much different, and am mostly insecure that AI can mimic it quite well.

The Radical Individualist:

"Disciplines from literature to philosophy are needed to help us answer questions about how we can find a place in the world when we are much less needed than before,"

You can always become a plumber. You'd be surprised how popular those guys are. And the pay is better.

Leo Marcel Schuman:

"Humanities" artifacts generated by LLMs are, by definition, inhuman.

Yes, these artifacts will obviously occupy, and likely overwhelm, a large and derivative media space. That space will be inhuman.

Some humans will choose to also occupy that inhuman space. Implants are coming, and cyborgs will soon pass from stories into reality.

Last night, a friend who works as an RN spoke of how patients squirm and resist being hooked into medical machines, even though they need them. So, hospitals sedate them. Eventually, after long enough under sedation, the patients stop resisting the machines. Often they die not long after this point, for having lost the will to live a life of their own.

Brandon Hendrickson:

Left unexplored, I find, is AI's potential to revolutionize humanities education now. I've lately been laying out how this can be done (https://www.losttools.org/p/yes-you-can-just-ask-chatgpt), but the tl;dr is that anyone can now make curriculum that's as emotionally compelling as it is intellectually rich, for students as young as elementary school. The humanities have a brilliant future if they embrace AI and start sharing their insights with non-specialists.

Trey:

Agreed, there’s nothing sad about AI being super smart and productive. I hope it gets even better than humans at humanities so we can use and learn from it. A lot of the backlash is people who want to feel that they are special and the humanities are sacred. It’s the classic story of threatened, ethnocentric elites getting in the way of genuine progress.

AspactusJunior:

I'm not sure I agree. I should say I am or was a literary academic. I studied the intersection of literature and political ideas. I am not in denial about AI. I encourage my colleagues to use it and teach their students to use it. I recognise its capability.

Firstly, the political theory article. Yes, it's plausible. But if you pay close attention you will see that the intellectual architecture of each passage is virtually the same, and is reflected in the similar structure of many of its sentences. After a while you feel like you're taking the same turns in a maze. Its account of ideological hegemony is also thin: a genuine Gramscian Marxist would do a better job, by virtue, I'd suggest, of actually believing it.

Secondly, your main point. I will not address it directly. I will talk instead of value and what it means to be human at least for me, and by analogy.

I'm not sure you've noticed, but in the last 10 years the quality of synthetic plants and flowers has improved drastically. The cost of the best imitations has fallen and you can now buy amazing specimens online. You have to be very close to detect the difference. However, I, personally, cannot imagine having any in my house. I want organic plants that I need to nurture, that can grow and die, and precisely because of this, because they bring the breathing meadow and swaying forest into my home, they nourish me. To be surrounded by organic life, even if it looks no different and is perhaps inferior in size and colour to its artificial imitation, has a completely different spiritual effect. What's the point of being surrounded by plastic leaves? I don't think I'm alone in that.

Art and literature are about the human condition. It matters that a human is expressing it, because the very act of doing so is part of that expression. We are talking to each other about what it means to be us. However refined and clever or insightful AI can be about human beings, it cannot speak with that authenticity, not least because it feels no release or refreshment thereby itself. You focus on the quality or verisimilitude of the finished product and dismiss the issue of origin. It is true that at the point of reception maybe we cannot tell the difference any more. But this is to miss the point. AI has no need to write about the human experience. We have that need.

It is better to be poor and free than wealthy but enslaved. This is a remoter analogy but the same thinking underpins it. The techno rationalist might argue: what is different about your condition, your daily experience? And the human being answers, everything.

A human traveller through the infinite reaches of the galaxy, on a voyage of exploration and discovery, returns at last to earth. Who does she want to speak to? Not the fake Einsteins and chatbot sex toys her billionaire funders furnished her with to ease her solitude in the vast reaches of space. No, she will find a down-at-heel bar, or a bench in a local district square, where the sweet wrappers and dead leaves whirl, and there enjoy the company, at last, of hairless hyper-evolved apes like herself, the grandmothers and dustmen, the alcoholics and harassed professionals. If you do not understand this, maybe you do not understand our species.

Some will argue that these arguments are sentimental and incoherent, even mystical. Agreed, terms like spiritual sound religious, although really they are only a shorthand for a more scientific vocabulary. But ultimately yes I do want to emphasize meaning and to insist that without meaning there is only complexity of pattern. Our meaning is linked to our individuality, our embodied subjectivity, and our mortality. One day AI may replicate these things, though it cannot yet. It has no subjectivity that we can tell of. It is disembodied. It is not mortal; at least, not as we are, not as we experience the knowledge of life and death. But even if it could simulate such things, though doubtless only at our invitation or command, these dimensions would not define its being as they do ours. There lies all the difference.

JakeH:

I would like to be able to say that the paper is shit, but I can't honestly do that. It's pretty good. I'm not persuaded by it, but it's pretty good.

My least favorite aspect is the term "epistemic domination," which I don't think captures well the nature of the problem the paper discusses. Epistemic domination suggests the power of an outside force to control access to information. The real culprit, however, is something like the opposite -- users' complete freedom to choose whatever information they want, no matter how ridiculous, unmediated by any gatekeepers. With AI, they can even get their computer to make up stuff they want to hear on the spot, and they can do that until they hit their limit and pony up for a higher-level subscription. All that is what's new. The result isn't epistemic domination but epistemic chaos.

Social media and AI companies don't actually control our access to information, nor even people's "encounters" with it, to say nothing of the "infrastructure of cognition itself." (Now there's a phrase.) I get what Claude-as-political-theory-student is saying, but there's nothing stopping people from ignoring their news feeds and reading, say, The New York Times and The Wall Street Journal every day side-by-side. Indeed, it's easier than ever to do that or similar. Few do? But few did 30 years ago. Media has always been compromised by corporate interests seeking to make a buck by capturing rubes' attention by delivering what large numbers of them want to see or hear.

As Claude points out, the new thing is "personalization," but that's not all. Choosing a Fox News-ish diet for one's news feed, or letting the algorithm do that, isn't much different from just watching a lot of Fox News. The difference is scale, both in terms of depth -- you can get a lot more of it, more than anyone could possibly consume -- and, especially breadth -- you can get a lot that's cuckoo bananas, stuff that nobody would've published 30 years ago except crackpots printing fliers in basements.

The problem, in short, isn't Tocqueville's "formidable circle drawn around thought" but that there isn't a circle anymore. Oh, for a circle. Not for nothing do many of us regret the "democratization of media."

Okay, so I just spent a little chunk of time writing an argument against a robot's pretty good paper. What does this mean for academia, especially in the humanities?

I'm a bit surprised to learn that this would find favor in a major journal. It strikes me as short and student-y. While it's thankfully unburdened by jargon-heavy po-mo crapola, it's hardly the bolt of clarity one appreciates in the best academic writing. One approach to the problem posed here is to raise the standards for major written work. If it's so easy to produce at this level now, require that students and academics alike produce more and better. To the extent that academia is dominated, as one hears tell, by relentless pressure to publish *something*, go ahead and relieve that invitation to fraud and instead invite academics to produce magnum opuses fueled by a native enthusiasm to express what is authentically their own.

One might object to the f-word with something like the following thought: Surely the point is to generate ideas. What does it matter where the ideas come from? If Claude discovered a new medical cure or solved an age-old math problem or hit upon a fresh take on a philosophical paradox, we wouldn't care that the LLM spit it out. Indeed, we'd want to know about it. Can't the same be said for humanities and social science research and theory generally?

Fair enough. At the same time, it strikes me as a species of plagiarism to take credit for work that is not one's own. Academia may permit and even encourage the production of AI-generated material -- no shame! -- but should merely require, as with conventions on things like movie credits, honest attribution, as in "Based on a story by Claude in response to prompt X." Perhaps there could be AI journals consisting of AI-generated drafts edited and curated by academics, and they could even include online a record of the prompts and edits so people could see them.

The more immediate problem, however, is the threat AI poses to the development of human intellect in the first place. Students at the high school and undergraduate levels are now cheating like crazy. In this context, when one is first learning how to think about various issues, cheating, fraud, are very much the right words. Students offload cognitive effort merely because they can and still get the grade, a grade meant to represent the quality of one's cognitive efforts.

The solution to this problem is simple. Require students to produce their own thoughts that demonstrate that they actually know their stuff. For college, I have in mind the law school model, where a student's course grade is based entirely on a three-hour written exam, where students respond to a prompt or series of prompts not disclosed in advance. Such exams could be open-book or even open-note at the discretion of the instructor but not open-internet. For high school, the answer is similarly simple: a majority of a student's grade would consist of performance on in-class, un-tech-assisted assessments. This isn't hard, and it's the norm for standardized tests like AP exams, SATs, the LSAT, bar exams, etc.

In this regard, AI may prove a godsend, because it will force a perhaps overdue reckoning with the decline in intellectual rigor in education and academia generally. When high school and college work product becomes such an obvious joke, it becomes impossible -- at least I hope it does -- to maintain the fiction that students are really learning.

Oliver Bateman Does the Work:

As head of research content for a consultancy, I now work as "keeper of the AI," maintainer of the Claude Code repositories, sentence-by-sentence de-AIfier (simply because it's best to keep the tells to a minimum, if only to suggest human review, unless you're an org proudly boasting of "touchless AI"). For my money, there are still too many tells here, and I'd subject this to many more rounds of review in markdown. "Not A but B," "if it's B, then A," &c. "It is remarkable," strange use of "paradox," etc. It's serviceable, though this is still a LEGO construction-method sort of job. Even so, serviceable is more than enough (when money is involved) to justify the deskilling of most paper jobs, much as "good enough" 3D-printed homes will be able to challenge the trades in a few years. Whether anyone likes it, or whether anything is done to halt the escalation - nothing, I suspect, or as little as possible - is another matter, and not one I'll have much say in.

steve hardy:

On the paper itself, I posed this comment to ChatGPT and got the following answer:

I detect a sense that the author has a romantic view of government, as opposed to the more critical view we get from Public Choice Theory.

Yes — and you can put your finger on the difference pretty cleanly.

Where the “romance” shows up

The paper’s moral baseline is basically: “It’s bad when a private actor can shape the epistemic environment without democratic mandate, transparency, or accountability.”

That leans on a civics-textbook ideal that “the democratic public” (or government) is the legitimate locus of accountability.

A Public Choice reader hears that and thinks: Accountable to whom, exactly? Politicians, regulators, and agencies are also self-interested, captured, and often less transparent than the theory assumes.

Public Choice lens: the symmetry problem

Public Choice would press: if epistemic domination is “arbitrary control over the information environment,” why assume the cure is more state control?

- Governments can also dominate epistemically: propaganda, licensing, curriculum mandates, pressure on media, secrecy, classification, funding biases, "disinformation" bureaucracies, etc.
- Regulators are subject to capture by the very firms being regulated, plus ideological capture by professional/NGO ecosystems.
- Voters are rationally ignorant; "democratic accountability" can be weak in practice.

So the Public Choice critique is: the author treats “democratic authorization” as a constraint, when in reality it can be a label that masks other forms of domination.

Still, the paper’s point can survive Public Choice (if reframed)

You can rescue the argument by changing the target from “private vs public” to “concentrated, uncheckable control — whoever holds it.”

A Public Choice-friendly reformulation would be:

- The problem is monopoly-like control over key epistemic infrastructure.
- The goal is polycentric checks and exit, not trusting a benevolent regulator.
- The question becomes: what institutional arrangements make it hard for any actor (firm or state) to unilaterally shape the epistemic environment?

Examples of what that might look like (conceptually):

- interoperability / portability (lower switching costs, more exit),
- competitive access to distribution channels,
- decentralization and open standards,
- transparency that can be audited by multiple adversarial parties (not just one agency),
- hard limits on state censorship and on state–platform collusion.

A good one-sentence critique you could use

“The paper treats ‘democratic accountability’ as a reliable safeguard against epistemic domination, but Public Choice suggests government actors face the same incentives and capture risks—so the real issue is concentrated power over the epistemic environment, whether private or public.”


JakeH:

That is so ChatGPT.