The Humanities Are About to Be Automated
AI can now write convincing academic papers. There’s no more room for denial.
There has long been a divide between those who believe that artificial intelligence is an enormously transformative technology and those who believe that it is hopelessly overhyped. Never has that divide been wider than in recent weeks.
Most people who have regularly used the technology in the recent past believe that the rate at which the state-of-the-art models progress is extremely rapid. They are now capable of engaging in long chains of reasoning that dramatically increase their problem-solving abilities in areas from financial analysis to molecular biology. Their ability to one-shot practical tools, from data visualizations to websites, is astonishing. They have become so good at coding that some of the most accomplished computer programmers in the world now say they barely write any of their own lines anymore. And just the other day, leading scientists presented a genuinely new finding in astrophysics, one that corrected long-held assumptions in the field, that was produced by a large language model.
And yet, many people I know, especially in the humanities, continue to be in deep denial about the state of this technology. Perhaps the most common way in which they dismiss the importance of this technology is to claim that AI models are not intelligent or creative. Sure, they say, these stochastic parrots can do some impressive things like summarize an email or write boilerplate corporate language; but they are congenitally incapable of making a genuine intellectual or artistic contribution.
A lot of the time, the people who make this argument are not very clear on what they mean by intelligence or creativity, and keep shifting ground when you push them on their definition. If you manage to pin them down, it usually becomes evident that their argument is circular: they believe that AI systems cannot be intelligent or creative because only texts or works of art produced by humans are instances of intelligence or creativity.
I don’t want to get consumed by a futile semantic debate about how to use those words. What I do know is that I consider the ability to make a novel, interesting, and plausible argument about politics to be one important indicator of intelligence and creativity, and that I devoted a long stretch of my early adult life to developing the ability to do so at a high level. So when, still jet-lagged from a recent trip to Europe, I woke up well before the crack of dawn a few days ago, I decided to see whether the newest AI models would be capable of writing a competent academic paper in my field of study, political theory. The result both elated and depressed me.
I spent the better part of a decade primarily devoting my intellectual energy to the study of political theory. That required me to master some of the most influential texts in the history of political thought; to make careful distinctions between different ideas and follow the premises of an argument through to their logical conclusion; and (the part I always struggled with the most because I found it so boring) to learn the conventions of academic publishing, developing a sense of the implicit norms which govern what you do and don’t do in a “publishable” academic paper.
Though little of either my academic or my public work now falls squarely into the category of political theory, narrowly understood, this training has deeply shaped how I write, and I am immensely grateful for it. Years of thinking about the nature of political arguments taught me how to understand the structure of all kinds of questions about the social world, to spot the logical inconsistencies in both scholarly articles and social media posts, and to turn a nascent idea into a cohesive argument at rapid speed. The core skills imparted by a rigorous study of political theory provided the intellectual foundation for much of my work, whether I was engaged in trying to understand the conditions under which democracies might come to be more vulnerable to demagogues than was widely believed at the time in The People versus Democracy, or to explain and critique the rise of a new ideology on the American left in The Identity Trap.
In fact, it is, I think, hardly a coincidence that political theorists tend to punch above their weight in public debate in the United States. The list of prominent thinkers and commentators who are at least partially trained in political theory is surprisingly long, especially given the field’s tiny size: to name but a few, it includes George Will, Andrew Sullivan, Michael Sandel, Bill Galston, Danielle Allen, and Patrick Deneen.1
In some ways, it should be easier for artificial intelligence to crack political theory than some other fields. After all, political theory doesn’t require collecting or analyzing new datasets, and many of the field’s canonical texts are likely to be strongly represented in the training data for AI. But in other ways, it is a high bar: Making an original argument in the field requires both a demanding mix of interpretative and argumentative skills, and a new idea on a topic people have argued about for thousands of years. The ability of large language models to create a convincing work of political theory thus seems to me a good test case for the debate about whether artificial intelligence is or is not intelligent.
(Claude’s opinion of the field, I must admit, is a little less flattering. Referencing my desire to disprove that AI is “merely a stochastic parrot,” Claude told me that “political theory may be the one field where ‘stochastic parrot’ is actually a compliment, since the whole discipline consists of creatively recombining things Tocqueville and Mill already said.” Ouch.)
I started by asking Claude Opus 4.6, with extended thinking enabled, to help me write a political theory paper that could be published in a top journal in the field. After consulting recent publications in top journals, it came up with five different ideas, some better than others. One, for which I later asked Claude to write a summary for general readers,2 reproduced here, felt to me like it held promise:
Political theorists have spent the last decade asking whether big corporations are like mini-governments—whether your boss is a kind of dictator, whether tech companies should be democratized. That’s the right instinct, but it’s looking in the wrong place. The real power that Google, Meta, and OpenAI exercise isn’t over your choices—it’s over the conditions under which you think. They don’t tell you what to believe; they shape what you encounter, what feels plausible, what questions seem worth asking, and increasingly, through generative AI, they produce the very material out of which your beliefs are formed. Tocqueville had a phrase for this: the “formidable circle drawn around thought.” Mill feared the same thing—that when a society loses the friction of genuinely competing ideas, even its true beliefs decay into “dead dogma,” held by rote and understood by no one. Both were responding to the communications revolutions of their own eras. Ours is more radical than anything they imagined, because for the first time in history, a handful of private companies control not just which ideas circulate, but the infrastructure of cognition itself—and they do so with no democratic mandate, minimal transparency, and almost no accountability. I call this epistemic domination, and I argue it’s the single greatest untheorized threat to self-governance in the twenty-first century.
In the next step, I uploaded two recent papers in Political Theory, a top journal, and asked Claude to give me an outline of the paper which follows the field’s conventions. After that, we went section by section, with me giving one round of high-level feedback on each. To give an example, I worried that the section it had written on John Stuart Mill spent too much time rehashing the arguments in favor of free speech from Chapter 2 of On Liberty, which would be very familiar to political theorists, so I tasked Claude with incorporating quotations from more obscure texts such as published letters.
The human feedback involved in this process certainly drew on my training in the field, but it was very minimal. Including the time it took Claude to generate the text, and the rather longer time it took me to read what Claude had written, it took less than two hours from when I had the idea to run this experiment to when the draft was finished.
The draft could certainly be improved in a few respects. There are a few places in the argument where reviewers could come up with clever objections. Claude explicitly urged me to check the page numbers in some of the references for the quotations it used against canonical editions of the works by Tocqueville and Mill. It also asked me to double-check the wording of some of the quotes from less canonical texts, such as the correspondence between Mill and Tocqueville. (Since I wanted to present Claude’s output to you in its original form, I did not do this. The few references I looked up seemed to be correct, but I expect that there are some mistakes that would need to be corrected before the paper could be submitted to an academic journal.) But on the whole, the outcome was depressingly good: I am confident that it could, with minor revisions, be published by a serious journal.
The power of motivated reasoning is immense. If this little experiment should happen to draw a bit of wider attention, a lot of people will quickly claim that this paper is deeply flawed or outright stupid, and that it could never have been published in a serious journal. The way to definitively disprove this would have been to do a version of the Sokal Hoax, getting this paper accepted at a good journal under false pretenses before sharing it publicly.
I briefly thought about doing this. But since the ethics of submitting work to journals under false pretenses is contested, and institutional review boards of universities tend to frown upon the practice, I decided not to do so. Suffice it to say that I am convinced that someone with basic knowledge in political theory could pull such a feat off with ease. If you are a political theorist who is deeply convinced of the contrary, why don’t you give it a good-faith try?
But the comparison to the famous hoax by Alan Sokal, as well as the follow-up by Helen Pluckrose, James Lindsay and Peter Boghossian, is in any case misleading. Both of these efforts targeted journals in academic subfields that had gone deeply awry; they were a deliberate reductio ad absurdum. The paper produced by Claude, by contrast, is hardly absurd. While it has its shortcomings, and is not exactly earth-shatteringly original, nobody can seriously call it a reductio. Had a fellow student submitted it to my department’s graduate student workshop when I was doing my PhD, my respect for them would have gone up rather than down.
This is not meant to be a gotcha about political theory, a field I continue to admire greatly. It is a wake-up call about the future of the humanities. Even if you put aside the extent to which some fields in the humanities have been captured by ideology in recent years, the whole enterprise has started to look a little askew. Professors at top universities are—in good part because of what it now takes to get a good job in the field—more focused on publishing erudite contributions to niche debates in scholarly journals which only a handful of their colleagues will ever read than on teaching and mentoring the ever-dwindling ranks of their students.
This was always absurd. But once artificial intelligence is able to write these contributions in the stead of humans—not just in political theory but also in theology and art history and comparative literature—it is totally indefensible.
In some ways, the Age of AI will make the humanities more important than ever. Disciplines from literature to philosophy are needed to help us answer questions about how we can find a place in the world when we are much less needed than before, and what it is to be human when we are no longer the only ones capable of doing some of the things of which our species was once uniquely capable. But at a time when artificial intelligence can, with growing ease, jump through the hoops that have over the past decades come to define an academic career in the humanities, a radical reimagination of how we pursue and impart meaningful knowledge in these fields is desperately needed.
Don’t believe me that the AI-induced crisis of the humanities is upon us? Take a look at the political theory paper Claude wrote up in a couple of hours for yourself:
The Oligarchy of Scale: Epistemic Domination and the Conditions of Self-Governance
Abstract
Political theorists have increasingly turned their attention to corporate power, asking whether firms exercise authority, coercion, or governance in ways analogous to states. I argue that this framing, while productive, generates a characteristic blind spot. The most consequential form of private power in contemporary societies operates not through the direct exercise of authority over identifiable subjects but on the epistemic conditions under which citizens form their beliefs, preferences, and political judgments. Recovering two underappreciated strands of the liberal tradition — Tocqueville’s analysis of the democratic mind in Democracy in America and Mill’s theory of epistemic infrastructure in On Liberty — I show that both thinkers theorized precisely this form of power, though neither possessed the conceptual vocabulary to name it. Drawing on their insights and on recent work in postcolonial and feminist epistemology, I develop the concept of epistemic domination: the arbitrary capacity of an actor to shape the epistemic conditions under which a political community reasons and judges. I distinguish this concept from republican domination, epistemic injustice, and ideological hegemony, and argue that the rise of artificial intelligence corporations represents its most complete contemporary realization. Democratic theory, I conclude, has been operating with an incomplete account of what self-governance requires.
Keywords: epistemic domination, Tocqueville, Mill, corporate power, artificial intelligence, democratic theory
Political theorists have recently rediscovered the corporation. After decades in which concentrated private power received relatively little attention from the discipline, a growing literature now insists that the authority exercised by firms over workers, consumers, and communities demands the same scrutiny that political theorists have long directed at the state.[^1] Elizabeth Anderson’s account of the workplace as a site of “private government” has been particularly influential, revealing the extent to which employers exercise arbitrary, unaccountable authority over the daily lives of those they employ.[^2] Isabelle Ferreras and Hélène Landemore have extended this line of argument to ask whether democratic principles require the democratization of the firm itself.[^3] David Ciepley has urged political theorists to recognize the corporation as a distinctive political entity — chartered by the state, endowed with legal personhood, yet governed by neither democratic nor market principles alone.[^4]
These contributions share an important structural feature. Each models corporate power on categories drawn from the analysis of political authority: governance, coercion, domination, legitimacy. The question they pose is, in essence, whether firms exercise something sufficiently like political power that the normative frameworks we apply to states should be extended to them. This is a productive line of inquiry, and the answer is plainly yes. But it also generates a characteristic blind spot. By asking whether corporations govern, coerce, or dominate in ways analogous to states, this literature tends to look for corporate power where we have already learned to look for state power — in the direct exercise of authority over identifiable subjects.
I want to suggest that the most consequential form of private power in contemporary societies operates differently. It does not principally work through the direct exercise of authority over employees or the coercive imposition of terms on consumers, though it does these things too. It works, rather, on the conditions under which citizens form their beliefs, interpretations, and political judgments — on the epistemic environment that democratic self-governance presupposes but that democratic theory has largely taken for granted. When a handful of corporations control the infrastructure through which information is produced, curated, and encountered — when they shape not merely what people know but the frameworks within which they interpret what they know — they exercise a form of power for which our existing theoretical vocabulary is not well equipped.
This is not an entirely new anxiety. The concern that concentrated control over the conditions of thought might prove more dangerous to self-governance than the direct exercise of coercive power has deep roots in the liberal tradition, though those roots have not been adequately traced. In what follows, I recover two largely underappreciated strands of this concern. The first runs through Tocqueville’s analysis of intellectual life under democracy in Democracy in America, where his account of the “tyranny of the majority” extends well beyond social conformism to theorize how democratic conditions can narrow the very range of what he calls “thinkable thought.” The second appears in Mill’s On Liberty, where the celebrated defense of free expression rests on a less frequently examined argument about the structural conditions that a society requires in order to maintain its collective capacity for sound judgment.
Reading Tocqueville and Mill together in this way reveals that both were theorizing a form of power that contemporary political theory still lacks adequate resources to name. Both recognized that self-governance can be undermined not only by coercion, censorship, or the direct suppression of dissent, but by subtler transformations in the epistemic environment — transformations that operate below the threshold of any individual’s freedom being visibly curtailed. Both were responding, crucially, to communications revolutions in their own eras that threatened to concentrate influence over public opinion in new and unprecedented ways. Drawing on these historical resources, I develop a concept I call epistemic domination: the arbitrary capacity of an actor to shape the epistemic conditions under which a political community forms its beliefs, preferences, and judgments. This concept is related to, but distinct from, several neighboring ideas in contemporary theory — Philip Pettit’s republican domination, Miranda Fricker’s epistemic injustice, and the Marxist tradition of ideological domination — and I take care below to specify the differences.
The contemporary form of private power that most fully realizes the anxieties Tocqueville and Mill articulated is the large artificial intelligence corporation. Companies such as Google, Meta, and OpenAI do not merely distribute information, as newspapers and broadcasters have long done. They increasingly shape the conditions of cognition itself: the questions users encounter, the answers that appear plausible, the framings that feel natural, the alternatives that remain thinkable. Through the design of recommender systems, the training of large language models, and the curation of search results, these firms exercise an ongoing, infrastructural influence over the epistemic lives of billions of citizens — an influence that is largely unaccountable, frequently opaque, and concentrated in remarkably few hands. If epistemic domination names a real and distinctive form of power, then the rise of AI corporations represents not merely a new chapter in the familiar story of corporate concentration, but a qualitative transformation in the relationship between private power and democratic self-governance.
My primary purpose in what follows, however, is not to offer a policy agenda or a regulatory framework for addressing this transformation. It is to argue that democratic theory has been operating with an incomplete account of what self-governance requires — and that Tocqueville and Mill understood this more clearly than we have generally recognized.
Tocqueville and the Democratic Mind
The most familiar element of Tocqueville’s analysis of democratic culture is his account of the “tyranny of the majority.” In its popular reception, the phrase names a straightforward political danger: that a democratic majority can vote to oppress minorities, concentrating power as effectively as any despot.[^5] In more careful scholarly treatments, the concept is understood to describe something subtler — a form of social conformism in which democratic citizens, eager for the approval of their equals, suppress dissenting views out of a desire to belong.[^6] Neither reading is wrong. But both, I want to suggest, capture only the outer layers of Tocqueville’s concern. Beneath the political danger and the sociology of conformism lies a more unsettling argument about the epistemic consequences of democratic life — an argument about how equality of conditions reshapes not merely what citizens are willing to say, but what they are equipped to think.
The key texts are not the celebrated chapters on majority tyranny in the first volume of Democracy in America, but the less frequently examined opening sections of the second volume, where Tocqueville turns from democratic institutions to democratic habits of mind.[^7] He begins with a paradox. Democratic peoples, precisely because they recognize no natural intellectual authority among their equals, are driven to seek authority in the only source that remains available to them — the aggregate opinion of the many. In aristocratic societies, Tocqueville argues, individuals defer to the intellectual authority of particular persons or classes whose superior judgment they acknowledge. Democracy dissolves this deference. No citizen in a democracy regards any other citizen as inherently more competent to judge: “as citizens become more equal and more similar, the tendency of each blindly to believe a certain man or a certain class decreases.”[^8] The result, however, is not intellectual independence. It is a transfer of cognitive authority from identifiable individuals to an anonymous and overwhelming collective. “Not only is common opinion the sole guide that remains for individual reason among democratic peoples,” Tocqueville writes, “but also it has among these peoples an infinitely greater power than among any other.”[^9]
The mechanism of this transfer is worth attending to closely, because it is here that Tocqueville moves beyond anything that can be captured by the language of conformism or political oppression. The authority of the majority over the democratic mind does not operate through persuasion, nor through force. It operates through a kind of cognitive saturation:
The public among democratic peoples has a singular power the idea of which aristocratic nations would not even be able to imagine. It does not persuade, it imposes its beliefs and makes them penetrate souls by a kind of immense pressure of the mind of all on the intelligence of each.[^10]
The phrase is remarkable. An “immense pressure of the mind of all on the intelligence of each” is not a description of social disapproval or political coercion. It is a description of an epistemic condition — a condition in which the sheer weight of shared opinion reshapes the individual’s capacity for independent judgment, not by punishing dissent but by making it cognitively unavailable. The majority, Tocqueville continues, “takes charge of providing individuals with a host of ready-made opinions, and thus relieves them of the obligation to form for themselves opinions that are their own.”[^11] The language of “relief” is pointed: citizens do not experience this as constraint. They experience it as convenience — the comfortable delegation of cognitive labor to a source that feels, by its very ubiquity, authoritative.
This transformation matters because it changes the character of intellectual constraint. Under aristocratic conditions, the obstacles to free thought are visible and personal: a censor, a patron, a church. One can identify the source of constraint and, at least in principle, resist it. The cognitive authority of the democratic majority operates differently. It does not suppress particular opinions by force. It renders them implausible — not by refuting them but by depriving them of the social conditions under which they could be taken seriously. In the first volume, Tocqueville captures this in a passage that deserves more attention than it has received:
In America, the majority draws a formidable circle around thought. Inside those limits, the writer is free; but woe to him if he dares to leave them. It is not that he has to fear an auto-da-fé, but he is the butt of mortifications of all kinds and of persecutions every day.[^12]
The passage is typically read as a description of social penalties for dissent — ostracism, career closure, reputational damage. But the metaphor of the “formidable circle” suggests something more fundamental. A circle drawn around thought is not a punishment imposed after the fact of dissent; it is a prior constraint on the space within which thought operates. What falls outside the circle is not merely dangerous to express. It ceases, over time, to be genuinely available as a live intellectual option.
Sheldon Wolin’s magisterial study of Tocqueville captures part of this dynamic. Wolin reads Tocqueville as a theorist of the “total culture” of democracy — a thinker who understood that democratic power operates not primarily through institutions but through the pervasive shaping of mores, habits, and dispositions.[^13] This is right as far as it goes. But Wolin’s emphasis on democratic culture can obscure the specifically epistemic dimension of Tocqueville’s concern. The problem is not merely that democracy produces a distinctive culture of conformity. It is that democratic conditions tend to produce a distinctive epistemology — a set of default assumptions about where authoritative knowledge resides, what counts as a credible claim, and which questions are worth asking — that systematically narrows the range of perspectives available to the democratic community.
It is in the closing passages of the chapter on democratic belief that Tocqueville states this concern in its most radical form. Having traced the transfer of cognitive authority from individuals and classes to the majority, he pauses to take stock of what has been gained and lost:
I see very clearly in equality two tendencies: one that leads the mind of each man toward new thoughts and the other that readily reduces him to thinking no more. And I notice how, under the dominion of certain laws, democracy would extinguish the intellectual liberty that the democratic social state favors, so that after breaking all the obstacles that were formerly imposed on it by classes or men, the human mind would bind itself narrowly to the general wills of the greatest number.[^14]
The passage is devastating in its precision. Democracy simultaneously liberates and extinguishes — it breaks the old constraints of class and caste, but replaces them with a new and in some ways more thorough form of intellectual closure. The human mind, having freed itself from the particular wills of aristocratic masters, “binds itself narrowly” to the general will of the majority. Tocqueville is fully aware of the echo of Rousseau, and fully aware of the irony: the general will, that great emancipatory concept, here becomes the instrument of epistemic confinement. His final verdict is unsparing: “Men would not have found the means to live independently; they would only have discovered, a difficult thing, a new face of servitude.”[^15]
It is in the famous final chapters of the second volume that Tocqueville draws these threads together in his portrait of “soft despotism.” The passage is routinely cited in discussions of paternalistic government, but its epistemic dimension is rarely foregrounded. Tocqueville describes a form of power that “does not destroy, it prevents things from being born; it does not tyrannize, it hinders, compromises, enervates, extinguishes, dazes.”[^16] The verbs are revealing. This power does not “destroy” or “tyrannize” — it “prevents things from being born,” it “extinguishes” and “dazes.” These are verbs of epistemic foreclosure, not coercion. What soft despotism eliminates is not freedom of action in any conventional sense but the generative capacity of a society — its ability to produce new ideas, new interpretations, new ways of understanding its own condition.
Two features of Tocqueville’s analysis deserve emphasis. The first is that the epistemic threat he describes is structural, not intentional. No individual or institution sets out to narrow the range of thinkable thought. The narrowing is an emergent consequence of democratic conditions themselves — of equality, of the cognitive authority of the majority, of the displacement of individual judgment by collective opinion.
The second is that Tocqueville was writing in response to a specific transformation in the infrastructure of public communication. The 1830s and 1840s saw the rapid expansion of the mass-circulation press in both France and the United States — the “penny press” that for the first time made newspapers available to a broad democratic public.[^17] Tocqueville does not discuss the press extensively in these chapters, but his analysis is intelligible only against this backdrop. The “immense pressure of the mind of all on the intelligence of each” is not an abstraction; it is mediated and amplified by new communications technologies that, for the first time, made it possible for a single set of framings to reach a genuinely mass audience.
Tocqueville, then, offers something more than a sociology of conformism and something deeper than a warning about majoritarian despotism. He provides the elements — though not the finished form — of a theory of epistemic compression: the process by which the conditions of democratic life, mediated by the infrastructure of mass communication, progressively narrow the range of interpretations, framings, and questions available to citizens. The threat to self-governance he identifies operates not at the level of what citizens are permitted to think, but at the level of what they are equipped to think. This distinction will prove essential in what follows.
Mill and the Ecology of Opinion
If Tocqueville theorizes the demand side of epistemic compression — how democratic citizens come to delegate their cognitive authority to the majority — Mill theorizes the supply side: what happens to a society’s collective capacity for judgment when the conditions under which opinions are formed, tested, and revised become impoverished. The standard reading of Chapter 2 of On Liberty treats it as a defense of individual expression — the ur-text of the liberal free speech tradition, grounding the right to speak in the autonomy and dignity of the speaker.[^18] This reading is not wrong, but it understates the radicalism of what Mill is doing. His argument is not primarily about the rights of individuals. It is about the epistemic needs of the community. The question that organizes the chapter is not “What does the individual have a right to say?” but “What conditions must obtain if a society is to maintain its collective capacity for sound judgment over time?”
Mill had been developing this question long before On Liberty. In his 1836 essay “Civilization,” he offered a remarkably direct analysis of the relationship between the expansion of mass communication and the degradation of public intellect. The essay observes that as the instruments for forming and circulating opinion grow more powerful, they do not — as optimists expected — elevate the quality of public discourse. They tend instead to flatten it. The growth of the reading public and the proliferation of cheap print do not produce a more discerning citizenry; they produce a citizenry more thoroughly saturated by a narrower range of ideas. “The individual becomes so lost in the crowd,” Mill writes, “that though he depends more and more upon opinion, he is apt to depend less and less upon any well-grounded opinion — less and less to have, properly speaking, any opinions of his own.”[^19] The problem is not ignorance. It is a specific form of epistemic dependence in which citizens are surrounded by more information than ever before, yet possess less genuine understanding — because the conditions for forming independent judgment have been eroded by the very institutions that supply them with ready-made conclusions.
Mill was also in direct correspondence with Tocqueville during this period, and their exchange reveals how closely the two thinkers were tracking the same anxiety from different national vantage points. In an 1840 letter responding to the second volume of Democracy in America, Mill wrote to Tocqueville that the great danger of the modern age was not political tyranny but the erosion of independent thought under conditions where the instruments of opinion-formation had grown overwhelmingly powerful.[^20] The letter confirms what the published works suggest: Mill understood himself to be engaged in a shared intellectual project with Tocqueville — the project of theorizing a form of power that operated not through coercion but through the structural transformation of the conditions under which citizens think.
On Liberty, published two decades after “Civilization,” represents the mature and systematic statement of this concern. Chapter 2 advances a series of arguments for the liberty of thought and discussion that are typically summarized as variations on a single theme: truth is best served by open debate.[^21] But this summary flattens what is in fact a carefully sequenced argument about the conditions under which a community maintains its epistemic health. Mill’s central insight is that even true beliefs degrade when they are not regularly tested against serious challenge. A belief held without understanding of the grounds on which it rests becomes what Mill calls “dead dogma” — mechanically repeated, poorly understood, and incapable of guiding action in new circumstances.[^22] “Both teachers and learners go to sleep at their post,” he warns, “as soon as there is no enemy in the field.”[^23] The concern here is not with the silenced individual but with the community’s relationship to its own convictions. A society in which true beliefs circulate as dead dogma has lost the capacity to understand why it believes what it believes.
Mill extends this argument in a direction that has not been sufficiently appreciated. Dissenting opinions, even mistaken ones, serve what we might call an ecological function: they preserve perspectives, framings, and partial insights that the dominant view has absorbed incompletely or not at all. When they are eliminated, the community does not simply lose a wrong opinion. It loses the corrective pressure that the wrong opinion exerted on the right one — the friction that kept the prevailing view honest, precise, and responsive to its own limitations. What Mill describes functions less like a marketplace and more like an ecosystem: a complex environment in which the health of the whole depends on a diversity of elements, including elements that are individually deficient. When the ecosystem is degraded — when dissenting views are eliminated or starved of the conditions in which they can survive — the result is not simply the loss of particular opinions but the slow atrophy of the society’s collective cognitive capacity. The meaning of received doctrine “is in danger of being lost, or enfeebled, and deprived of its vital effect on the character and conduct.”[^24]
What emerges from these arguments is not primarily a theory of individual rights but a theory of epistemic infrastructure — an account of the structural conditions that a society must maintain if it is to preserve its collective capacity for sound judgment. Those conditions include, at minimum: the active circulation of competing interpretations; the regular testing of prevailing convictions against the strongest available objections; the preservation of dissenting perspectives even when they are unpopular; and the maintenance of institutional spaces in which intellectual friction is not merely tolerated but actively sustained.
Mill, like Tocqueville, locates the primary danger not in state action but in structural conditions that operate below the threshold of any identifiable act of repression. The “tyranny of prevailing opinion” does not require a censor. It requires only the absence of countervailing pressure — an absence that can be produced as effectively by the sheer homogeneity of available opinion as by the deliberate suppression of dissent. What Mill fears is not persecution in its crude forms but the quiet disappearance of intellectual alternatives: not the burning of books but the failure to write them; not the imprisonment of dissenters but the slow erosion of the social conditions under which dissent remains a live possibility.
This concern was inseparable from the material transformation of the press that Mill witnessed over his lifetime. The mid-Victorian period saw a dramatic consolidation of British journalism. The repeal of the so-called “taxes on knowledge” — the advertisement duty in 1853, the stamp duty in 1855, and the paper duty in 1861 — had been championed by liberals as a blow for intellectual freedom, opening the press to new voices and broader participation.[^25] The actual result was more ambiguous. Cheaper production costs enabled the rise of mass-circulation newspapers, but the capital requirements for launching and sustaining a publication at national scale simultaneously concentrated ownership in fewer hands. The Daily Telegraph, founded in 1855, became the first British penny daily and reached a circulation that dwarfed its predecessors; by the 1860s, a small number of large-circulation papers exercised an influence over public opinion that would have been unrecognizable a generation earlier.[^26] Mill does not name these developments in On Liberty, but the essay “Civilization” had already identified the mechanism with uncomfortable clarity: the same technologies that distribute opinion more widely also tend to homogenize it, because the economics of mass communication reward breadth of reach over depth or diversity of perspective. The paradox — more readers, fewer genuinely independent sources of judgment — is precisely the structural condition that On Liberty’s arguments are designed to address.
Read together, Tocqueville and Mill are theorizing the same phenomenon from complementary directions. Tocqueville traces how democratic equality generates an irresistible gravitational pull toward cognitive uniformity — the “immense pressure of the mind of all on the intelligence of each.” Mill traces what is lost when that uniformity goes unchecked: not merely the rights of individual dissenters, but the epistemic infrastructure on which the community’s collective intelligence depends. Tocqueville describes the force; Mill describes the damage. Both understood — with a clarity that has not been adequately recognized — that self-governance requires not merely the formal liberty to speak and vote, but the maintenance of background epistemic conditions that no account of rights, however generous, is sufficient to guarantee. Neither, however, possessed the conceptual vocabulary to name this form of power precisely. It is to the construction of that vocabulary that I now turn.
Epistemic Domination
Tocqueville and Mill were reaching for a concept that neither quite possessed. Both identified a form of power that operates on the conditions of thought rather than on thought itself — that shapes not what citizens conclude but the epistemic environment within which they reason. Both recognized that this power is more dangerous to self-governance than overt coercion precisely because it is harder to perceive: it does not suppress particular opinions but narrows the range of opinions that a community is equipped to form. Both observed that it is amplified by, and in important respects inseparable from, the material infrastructure through which opinion is produced and circulated. Yet neither developed a precise conceptual vocabulary for naming it. Tocqueville gestured toward it through metaphor — the “formidable circle around thought,” the “immense pressure of the mind of all on the intelligence of each.” Mill theorized its consequences — dead dogma, intellectual atrophy, the slumber of decided opinion — without isolating it as a distinct category of power. The concept I propose here is an attempt to complete what they began.
I call it epistemic domination — a term I adapt from recent work in postcolonial and feminist epistemology, where it has been used to describe the imposition of dominant knowledge systems on marginalized communities.[^27] In that literature, epistemic domination names the process by which colonial or hegemonic epistemic frameworks displace alternative ways of knowing — what Bethwell Ogot has described as the denial of intellectual agency to colonized peoples, and what Kristie Dotson and others have theorized as a structural form of epistemic oppression that goes beyond the interpersonal wrongs identified by Fricker.[^28] I retain the term because it captures something that these scholars have recognized more clearly than the mainstream of political theory: that control over the conditions of knowledge is a distinctive and irreducible form of power. But I redeploy it in a different register. Where the postcolonial usage centers on the displacement of one epistemic tradition by another — Western over indigenous, dominant over subaltern — I use the term to name a structural condition in which a single actor controls the infrastructure through which an entire political community encounters, evaluates, and interprets information, regardless of which epistemic tradition is at stake. The domination I describe is not the triumph of one way of knowing over another. It is the concentration of control over the conditions under which all knowing takes place.
More precisely: epistemic domination, as I use the term here, denotes the capacity of an actor to arbitrarily shape the epistemic conditions — the available information, the operative framings, the range of interpretations that appear plausible, the questions that are treated as worth asking — under which a political community forms its beliefs, preferences, and judgments. The emphasis on capacity is deliberate. Like Pettit’s republican domination, epistemic domination is a structural condition, not an event. It obtains whenever an actor possesses the ability to reshape the epistemic environment at will, whether or not that ability is exercised on any given occasion. A community subject to epistemic domination may, at any particular moment, enjoy a rich and diverse informational environment. The domination consists in the fact that this condition is contingent on the will of another — that the actor who controls the epistemic infrastructure could, if it chose, narrow, distort, or redirect the flow of information, interpretation, and framing on which the community’s judgment depends.
The emphasis on arbitrarily is equally important. Not all shaping of epistemic conditions constitutes domination. Editors exercise judgment about what to publish; teachers select curricula; governments fund some research programs and not others. These activities shape the epistemic environment, but they do so under constraints — professional norms, institutional accountability, democratic authorization, competitive pressures — that limit the arbitrariness of the shaping. Epistemic domination obtains when the shaping is not subject to adequate constraints of this kind: when the actor who controls the epistemic infrastructure is not meaningfully accountable to the community whose epistemic conditions it shapes, and when no institutional mechanism exists through which that community can contest, check, or redirect the shaping.
Four existing bodies of work approach the phenomenon I am describing. Each illuminates an aspect of it, but none captures the whole.
The postcolonial and feminist epistemology literature from which I draw the term comes closest to recognizing epistemic domination as a structural condition rather than an interpersonal wrong.[^29] Irfan Ahmad has usefully emphasized the temporal dimension of epistemic domination — the way in which dominant knowledge systems naturalize particular conceptions of historical time and progress, rendering alternative temporalities unintelligible.[^30] This structural orientation is essential to the concept as I develop it here. But the postcolonial usage is primarily concerned with asymmetries between epistemic communities — with the domination of one tradition of knowledge by another. The form of epistemic domination I identify operates within a single political community and affects all its members, including those whose epistemic traditions are dominant. When a handful of corporations control the infrastructure through which information is encountered and interpreted, the resulting domination is not a matter of which cultural tradition prevails. It is a matter of who controls the background conditions against which all traditions — dominant and marginalized alike — must operate.
The second neighboring concept is Philip Pettit’s republican conception of domination as arbitrary interference.[^31] Pettit’s account has the right structural form: domination is a condition of vulnerability to arbitrary power, not a discrete act of oppression, and it can obtain even when the dominating party is benevolent. But Pettit’s framework is oriented toward interference — toward acts that constrain the choices available to the dominated agent. Epistemic domination does not primarily constrain choices. It shapes the conditions under which choices are conceived. A citizen subject to epistemic domination may face no interference whatsoever in the Pettitian sense: no one restricts her options, penalizes her decisions, or coerces her behavior. What is shaped, rather, is the informational and interpretive environment within which she identifies her options, understands their significance, and forms her preferences among them. This is a form of power that operates upstream of the domain in which Pettit’s domination does its work — not at the point of decision but at the point where the materials for decision are assembled.[^32]
The third neighboring concept is Miranda Fricker’s epistemic injustice, which identifies two forms of wrong done to individuals in their capacity as knowers: testimonial injustice, in which a speaker’s credibility is unfairly deflated due to prejudice, and hermeneutical injustice, in which a gap in collective interpretive resources prevents someone from making sense of their own experience.[^33] Fricker’s framework is illuminating, particularly her recognition that epistemic wrongs can be structural rather than merely interpersonal. But her account is oriented toward injustices suffered by particular knowers — individuals or groups who are disadvantaged in specific epistemic transactions. Epistemic domination, by contrast, is a condition that affects the entire community, including those who benefit from it in the short run. The domination is not a matter of who is believed or whose experience can be articulated. It is a matter of who controls the background conditions against which all believing and articulating takes place.
The fourth neighboring concept is the Marxist and Gramscian tradition of ideological domination — the claim that ruling classes maintain their power in part by shaping the beliefs, values, and interpretive frameworks of subordinate classes.[^34] This tradition captures something essential: that power can operate through the constitution of the taken-for-granted, not merely through force or institutional authority. But ideological domination, in both its classical and Gramscian forms, presupposes a relatively coherent ideological content — a set of beliefs or values that serves the interests of the dominant class and is propagated, deliberately or through structural mechanisms, to the dominated. Epistemic domination need not operate through the imposition of any particular ideology. It can operate through fragmentation as readily as through false consciousness — through the production of confusion, the erosion of shared epistemic standards, the flooding of the informational environment with noise, or the algorithmic amplification of engagement over accuracy. An actor who dominates a community’s epistemic conditions need not persuade that community of anything in particular. It is sufficient to control the infrastructure through which persuasion, inquiry, and interpretation take place.
The concept of epistemic domination thus occupies a distinctive position in the theoretical landscape. What Tocqueville described as the narrowing of “thinkable thought” and Mill theorized as the atrophy of society’s capacity for independent judgment is, I am suggesting, best understood as a problem of domination — of arbitrary, unaccountable control over the epistemic conditions on which self-governance depends.
One feature of this concept deserves special emphasis. Epistemic domination represents a threat to democratic self-governance that is in a critical sense prior to the concerns that occupy most democratic theory. Theories of deliberative democracy presuppose citizens who arrive at deliberation with beliefs, preferences, and interpretive frameworks that are, in some meaningful sense, their own.[^35] Theories of aggregative democracy presuppose that the preferences expressed through voting reflect the authentic judgments of voters.[^36] Theories of representation presuppose that constituents are capable of evaluating the performance of their representatives against some standard of their own devising.[^37] Epistemic domination threatens each of these presuppositions at its root. If the conditions under which citizens form their beliefs, preferences, and interpretive frameworks are themselves subject to arbitrary control by an unaccountable actor, then the outputs of deliberation, aggregation, and representation are compromised at their source — not because any particular stage of the democratic process has been corrupted, but because the epistemic inputs on which every stage depends have been shaped by a power that is neither democratically authorized nor democratically accountable. This is the sense in which democratic theory has been operating with an incomplete account of what self-governance requires: it has attended carefully to the procedures through which collective decisions are made, while largely taking for granted the epistemic conditions that make those procedures meaningful.
The Contemporary Realization
Every communications revolution generates new possibilities for epistemic domination — and new theoretical vocabularies for understanding them. Tocqueville wrote in the shadow of the mass-circulation press; Mill watched the consolidation of Victorian journalism concentrate opinion-formation in fewer hands. Each identified a structural threat to the epistemic conditions of self-governance that the previous generation’s theoretical vocabulary was inadequate to name. We are living through an analogous moment, though the transformation underway differs from its predecessors not merely in scale but in kind.
The difference can be stated precisely. The mass-circulation press that Tocqueville and Mill confronted was a technology of distribution. It determined which opinions, interpretations, and framings reached a broad public, and it concentrated that determination in a relatively small number of editors and proprietors. But it operated on finished epistemic products — articles, editorials, arguments — that were composed by identifiable authors, subject to professional norms, and available for public scrutiny. The epistemic environment was shaped, but the mechanisms of shaping were legible.
The infrastructure controlled by contemporary technology corporations operates at a different level. Companies such as Google and Meta do not merely distribute finished epistemic products to a mass audience. Through the design of search algorithms, recommender systems, and content curation mechanisms, they shape the conditions of epistemic encounter itself: the process by which individuals come to be presented with information, the criteria by which some claims are rendered salient and others invisible, and the framings within which new events are interpreted. The distinction matters because it marks the difference between controlling a channel and controlling an environment. A newspaper editor decides what to print. A search algorithm determines what it is possible to find. A recommender system shapes not merely which opinions a citizen encounters but the sequence, context, and affective framing in which she encounters them — conditions that cognitive science has shown to be at least as consequential for judgment as the propositional content of the opinions themselves.[^38] Shoshana Zuboff’s influential account of “surveillance capitalism” captures important dimensions of this transformation, particularly the extraction of behavioral data and its deployment for purposes of prediction and modification.[^39] But Zuboff’s framework is oriented toward the economic logic of data extraction and the threats it poses to individual autonomy. It does not theorize the specifically political dimension of the problem — the way in which corporate control over epistemic infrastructure undermines the conditions of democratic self-governance as such. The concept of epistemic domination names a political condition, not merely an economic practice.
Three features of this infrastructure make it a qualitatively new site of epistemic domination, rather than simply the latest chapter in the familiar story of media concentration.
The first is opacity. The editorial judgments of a newspaper proprietor are, at least in principle, recoverable. One can read the paper, compare it with rivals, and form a reasonably accurate picture of how the proprietor shapes the epistemic environment. The algorithmic systems that curate search results and populate news feeds are not recoverable in this way. They are proprietary, complex beyond the interpretive capacity of individual users, and frequently beyond the full understanding of the engineers who designed them.[^40] Citizens subject to this form of epistemic shaping cannot, even in principle, identify the mechanisms by which their informational environment has been constituted. The circle drawn around thought is invisible to those inside it. This opacity does not merely make epistemic domination harder to resist; it makes it harder to perceive. Under the conditions Tocqueville described, citizens could at least recognize the social pressure to conform, even if they lacked the resources to withstand it. Under conditions of algorithmic opacity, the shaping of the epistemic environment presents itself not as pressure but as the natural order of available information — as simply what there is to know.
The second is personalization. Mass media, for all its concentrating effects, at least imposed a common epistemic environment. Citizens reading the same newspaper inhabited a shared informational world, even if that world was partial and curated. This commonality had a paradoxical benefit: precisely because the distortion was shared, it was in principle detectable. Algorithmic systems eliminate this possibility by producing individuated epistemic environments — news feeds, search results, and recommendation streams tailored to each user’s inferred preferences and behavioral profile. The result is not a single “formidable circle around thought” but millions of individually drawn circles, each shaped by the same infrastructure yet enclosing different informational worlds.[^41] Mill’s fear that the disappearance of intellectual friction would reduce living truth to dead dogma finds its most efficient realization in recommender systems that maximize engagement by presenting users with increasingly refined variations of what they already believe — precision instruments for producing the “deep slumber of a decided opinion.”
The third is generativity, and it marks the sharpest discontinuity — not only with the press that Tocqueville and Mill confronted, but with the social media platforms and search engines that have dominated discussion of technology and democracy over the past decade. Previous concentrations of communicative power, including the platform-based concentrations that currently preoccupy scholars and regulators, operate on human-produced content. They curate, amplify, or suppress, but the underlying epistemic material is composed by identifiable human agents with their own intentions and commitments. Large language models and generative AI systems introduce a genuinely novel element: the capacity to produce plausible text, argument, and interpretation at scale, without an identifiable author and without the constraints — accountability, reputation, professional norms — that have historically operated as checks on the production of public speech.[^42]
This last feature deserves particular emphasis because it distinguishes the emerging form of epistemic domination from the one that has occupied most critical attention thus far. The epistemic domination exercised through search engines and social media platforms is primarily a domination of selection: it determines which human-produced claims, arguments, and framings are made visible or invisible to which audiences. This is a serious form of power, and the analysis of opacity and personalization applies to it fully. But it remains parasitic on a substrate of human epistemic production. The form of epistemic domination that generative AI makes possible is qualitatively different. It is a domination not merely of selection but of production — the capacity to generate plausible epistemic material at a scale and speed that can flood public discourse, simulate the texture of genuine argument, and erode the distinction between inquiry and fabrication. A corporation that controls both the generation and the curation of the epistemic material on which a political community relies does not merely shape which existing opinions citizens encounter. It shapes what there is to encounter.
Existing regulatory and theoretical frameworks are inadequate to this phenomenon. Antitrust doctrine addresses market concentration, and recent proposals to apply it to technology companies have gained significant traction.[^43] But antitrust has no vocabulary for epistemic concentration. Its remedies — breaking up dominant firms, preventing anticompetitive mergers, imposing interoperability requirements — operate on market structure. A technology conglomerate can be divided into smaller units that collectively exercise the same epistemic domination, because the domination derives not from market share as such but from control over the epistemic infrastructure through which citizens form their judgments.
Rights-based approaches fare no better. The liberal rights tradition offers powerful protections against identifiable acts of interference with individual expression — censorship, prior restraint, penalties for speech. But epistemic domination does not interfere with anyone’s expression. It shapes the background conditions against which expression takes place. A citizen whose epistemic environment has been constituted by algorithmic curation and generative AI has suffered no violation of her rights in any standard sense. She has not been silenced, censored, or coerced. She has been subjected to conditions that compromise her capacity for independent judgment without any of her formally recognized rights being engaged.[^44]
Platform regulation — the most rapidly developing area of policy response — treats technology corporations as media companies: publishers or distributors of content subject to obligations of accuracy, fairness, or transparency.[^45] This framing captures something real. But assimilating the power of AI corporations to the familiar category of media power misses the deeper structural transformation. A media company operates within an epistemic environment that it did not create and does not fully control. A corporation that designs the search algorithms, curates the information feeds, and deploys the generative models through which citizens encounter the world does not operate within the epistemic environment. It constitutes that environment.
What Tocqueville described as the emergent consequence of democratic equality and Mill theorized as the structural decay of intellectual life under conditions of homogeneity has, in the space of two decades, become the designed product of privately controlled infrastructure. The epistemic conditions of self-governance — the diversity of available interpretations, the friction between competing perspectives, the capacity of citizens to form judgments on the basis of an informational environment they can scrutinize and contest — are now contingent on the decisions of a remarkably small number of corporate actors who are neither democratically authorized nor democratically accountable. This is epistemic domination in its most complete contemporary form. The question it poses for democratic theory is not how to regulate a new kind of media company, but how to ensure that the epistemic conditions on which self-governance depends are not subject to the arbitrary will of any actor — public or private.
Conclusion
I have argued that democratic theory has been operating with an incomplete account of what self-governance requires. The tradition has attended carefully to the procedures through which collective decisions are made — to the design of deliberative institutions, the aggregation of preferences, the authorization of representatives — while largely taking for granted the epistemic conditions that make those procedures meaningful. Tocqueville and Mill understood, more clearly than we have generally recognized, that these conditions are neither natural nor self-sustaining. Both theorized a form of power that operates not on what citizens are permitted to think but on what they are equipped to think — a power that narrows the range of available interpretations, erodes the friction on which sound collective judgment depends, and does so without engaging the categories of coercion, censorship, or rights-violation through which political theory has traditionally identified threats to freedom.
The concept of epistemic domination is an attempt to give this form of power a precise name and a precise location in our theoretical vocabulary. It is not reducible to republican domination, epistemic injustice, ideological hegemony, or the postcolonial critique of dominant knowledge systems, though it draws on and learns from each. It identifies a structural condition — the arbitrary, unaccountable control of the epistemic infrastructure on which a political community depends — that is prior to and more fundamental than the procedural questions that have occupied most democratic theory. If the epistemic inputs of democratic life are themselves subject to domination, then no procedural reform, however well designed, can be sufficient to secure self-governance.
This paper has been primarily diagnostic rather than prescriptive, and deliberately so. The construction of institutional responses to epistemic domination — responses that might secure what we could call epistemic non-domination as a condition of democratic life — is a task that exceeds the scope of a single article.[^46] But the shape of such responses can at least be gestured toward. They would need to address not merely the content that circulates within the epistemic environment but the control of the infrastructure itself. They would need to ensure that no single actor — public or private — possesses the arbitrary capacity to reshape the conditions under which citizens form their judgments. They would need, in short, to extend the democratic demand for non-domination from the domain of political decision-making, where it has long been recognized, to the domain of epistemic conditions, where it has not. What Tocqueville and Mill understood, and what we have been slow to remember, is that the greatest threats to self-governance are often invisible precisely because they operate on the conditions of thought itself.
[^1]: For an overview of this turn, see [PT corporations piece, Vol. 53, Issue 4 — citation to be completed].
[^2]: Elizabeth Anderson, Private Government: How Employers Rule Our Lives (and Why We Don’t Talk about It) (Princeton: Princeton University Press, 2017).
[^3]: Isabelle Ferreras, Firms as Political Entities: Saving Democracy through Economic Bicameralism (Cambridge: Cambridge University Press, 2017).
[^4]: David Ciepley, “Beyond Public and Private: Toward a Political Theory of the Corporation,” American Political Science Review 107, no. 1 (2013): 139–58.
[^5]: This reading is prominent in constitutional theory and Federalist political thought. See, e.g., the invocations of Tocqueville in discussions of countermajoritarian institutions in Alexander M. Bickel, The Least Dangerous Branch: The Supreme Court at the Bar of Politics (New Haven: Yale University Press, 1962).
[^6]: For representative versions of this reading, see Jon Elster, Alexis de Tocqueville: The First Social Scientist (Cambridge: Cambridge University Press, 2009), 149–55; Larry Siedentop, Tocqueville (Oxford: Oxford University Press, 1994), 67–73.
[^7]: Alexis de Tocqueville, Democracy in America, trans. Harvey C. Mansfield and Delba Winthrop (Chicago: University of Chicago Press, 2000), Volume II, Part 1. All subsequent citations refer to this edition.
[^8]: Tocqueville, Democracy in America, II.1.2, 409.
[^9]: Tocqueville, Democracy in America, II.1.2, 409.
[^10]: Tocqueville, Democracy in America, II.1.2, 409.
[^11]: Tocqueville, Democracy in America, II.1.2, 410.
[^12]: Tocqueville, Democracy in America, I.2.7, 244.
[^13]: Sheldon S. Wolin, Tocqueville Between Two Worlds: The Making of a Political and Theoretical Life (Princeton: Princeton University Press, 2001), especially chapters 9–12.
[^14]: Tocqueville, Democracy in America, II.1.2, 410.
[^15]: Tocqueville, Democracy in America, II.1.2, 411.
[^16]: Tocqueville, Democracy in America, II.4.6, 663.
[^17]: On the transformation of the American press in this period, see Paul Starr, The Creation of the Media: Political Origins of Modern Communications (New York: Basic Books, 2004), 125–52.
[^18]: For a representative treatment, see Frederick Schauer, Free Speech: A Philosophical Enquiry (Cambridge: Cambridge University Press, 1982), 15–34.
[^19]: John Stuart Mill, “Civilization,” in Essays on Politics and Culture, ed. Gertrude Himmelfarb (Garden City, NY: Doubleday, 1962), 51. The essay was first published in the London and Westminster Review in April 1836.
[^20]: Mill to Tocqueville, 1840. The Mill-Tocqueville correspondence is usefully discussed in H. O. Pappé, “Mill and Tocqueville,” Journal of the History of Ideas 25, no. 2 (1964): 217–34. The specific phrasing should be verified against The Collected Works of John Stuart Mill, Vol. XIII, ed. Francis E. Mineka and Dwight N. Lindley (Toronto: University of Toronto Press, 1963).
[^21]: This is the standard pedagogical summary. See, e.g., Jonathan Riley, Mill on Liberty (London: Routledge, 1998), chapter 3.
[^22]: John Stuart Mill, On Liberty, ed. Elizabeth Rapaport (Indianapolis: Hackett, 1978), 34. All subsequent citations refer to this edition.
[^23]: Mill, On Liberty, 38.
[^24]: Mill, On Liberty, 39.
[^25]: On the “taxes on knowledge” and their repeal, see Martin Hewitt, The Dawn of the Cheap Press in Victorian Britain: The End of the ‘Taxes on Knowledge,’ 1849–1869 (London: Bloomsbury, 2014).
[^26]: Aled Jones, Powers of the Press: Newspapers, Power and the Public in Nineteenth-Century England (Aldershot: Scolar Press, 1996), chapters 4–5; see also Stefan Collini, Public Moralists: Political Thought and Intellectual Life in Britain, 1850–1930 (Oxford: Clarendon Press, 1991), chapters 1–2.
[^27]: The term appears with increasing frequency in postcolonial and feminist epistemology. See especially Kristie Dotson, “Conceptualizing Epistemic Oppression,” Social Epistemology 28, no. 2 (2014): 115–38; Bethwell A. Ogot, “Rereading the History and Historiography of Epistemic Domination and Resistance in Africa,” African Studies Review 52, no. 1 (2009): 1–22; and Vrinda Vaditya, “Social Domination and Epistemic Marginalisation: Towards Methodology of the Oppressed,” Social Epistemology 32, no. 4 (2018): 272–85.
[^28]: Dotson, “Conceptualizing Epistemic Oppression,” 115–20; see also Gaile Pohlhaus Jr., “Epistemic Oppression,” in The Oxford Handbook of Feminist Philosophy, ed. Kim Q. Hall and Ásta (Oxford: Oxford University Press, 2021), who recommends “epistemic domination” as the more encompassing term.
[^29]: Kiefer Roberts Harris, “Epistemic Domination,” Thought: A Journal of Philosophy 11, no. 1 (2022): 42–52, develops the concept in a broadly feminist epistemological framework, with attention to the structural conditions under which epistemic agency is compromised.
[^30]: Irfan Ahmad, “The Time of Epistemic Domination,” ReOrient 6, no. 2 (2021): 171–97.
[^31]: Philip Pettit, Republicanism: A Theory of Freedom and Government (Oxford: Oxford University Press, 1997), especially chapters 2–3; see also Pettit, On the People’s Terms: A Republican Theory and Model of Democracy (Cambridge: Cambridge University Press, 2012).
[^32]: The language of “upstream” and “downstream” in relation to preference formation draws on, though is not identical to, Cass Sunstein’s discussion of adaptive preferences in Free Markets and Social Justice (Oxford: Oxford University Press, 1997), 13–31.
[^33]: Miranda Fricker, Epistemic Injustice: Power and the Ethics of Knowing (Oxford: Oxford University Press, 2007).
[^34]: Antonio Gramsci, Selections from the Prison Notebooks, ed. and trans. Quintin Hoare and Geoffrey Nowell Smith (New York: International Publishers, 1971), especially 12–13, 52–55, 195–96. For a useful overview of the development of the concept of hegemony, see Perry Anderson, “The Antinomies of Antonio Gramsci,” New Left Review 100 (1976): 5–78.
[^35]: See, e.g., Jürgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. William Rehg (Cambridge, MA: MIT Press, 1996), 304–8.
[^36]: See, e.g., William H. Riker, Liberalism Against Populism: A Confrontation Between the Theory of Democracy and the Theory of Social Choice (San Francisco: W. H. Freeman, 1982).
[^37]: See, e.g., Hanna Pitkin, The Concept of Representation (Berkeley: University of California Press, 1967), especially chapters 7–8.
[^38]: On the effects of framing, sequence, and context on judgment, the foundational work remains Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211, no. 4481 (1981): 453–58. For a recent treatment of algorithmic framing effects specifically, see Cass R. Sunstein, “Algorithms, Correcting Biases,” Social Research 86, no. 2 (2019): 499–511.
[^39]: Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019). Zuboff’s analysis is invaluable for understanding the economic logic of data extraction, but her central categories — “behavioral surplus,” “prediction products,” “instrumentarian power” — are oriented toward the commodification of experience rather than the political conditions of self-governance. The concept of epistemic domination differs in locating the threat not in what corporations extract from citizens but in what they impose upon the epistemic conditions within which citizens reason and judge.
[^40]: On the opacity of algorithmic systems as a distinctive political problem, see Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2015); and Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3, no. 1 (2016): 1–12.
[^41]: The fragmentation of shared epistemic environments is discussed in Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York: Penguin, 2011); for a more rigorous empirical treatment, see Eytan Bakshy, Solomon Messing, and Lada A. Adamic, “Exposure to Ideologically Diverse News and Opinion on Facebook,” Science 348, no. 6239 (2015): 1130–32.
[^42]: On the epistemic implications of generative AI, see Regina Rini, “Deepfakes and the Epistemic Backstop,” Philosopher’s Imprint 20, no. 24 (2020): 1–16, on the erosion of testimonial norms by synthetic media.
[^43]: See Lina Khan, “Amazon’s Antitrust Paradox,” Yale Law Journal 126, no. 3 (2017): 710–805; and Tim Wu, The Curse of Bigness: Antitrust in the New Gilded Age (New York: Columbia Global Reports, 2018).
[^44]: For a discussion of the limits of rights-based frameworks in addressing structural epistemic conditions, see K. Sabeel Rahman, “The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept,” Cardozo Law Review 39, no. 5 (2018): 1621–89.
[^45]: On platform regulation as media regulation, see Philip Napoli, “Social Media and the Public Interest: Governance of News Platforms in the Realm of Market Failures,” Telecommunications Policy 39, no. 9 (2015): 751–60. The European Union’s Digital Services Act (2022) represents the most ambitious effort in this direction; see Regulation (EU) 2022/2065.
[^46]: I develop the concept of epistemic non-domination and its institutional implications in a companion piece currently in preparation.
My Persuasion colleague Francis Fukuyama wrote his dissertation on international relations; but he studied political theory deeply, his dissertation was co-chaired by a political theorist, and his most influential works were nearly as much works of political theory as of comparative politics.
The substance of the original idea was very similar: “Political theorists have recently begun taking corporations seriously as political-theoretic objects (there’s a piece on this in Political Theory 53:4). But the AI companies represent something unprecedented: entities whose products directly shape the epistemic environment, whose internal governance structures determine the boundaries of public discourse, and whose scale gives them quasi-sovereign power—yet which are theorized, when they’re theorized at all, through frameworks (antitrust, regulation, rights) that were designed for a fundamentally different kind of corporate actor. The paper would reconstruct what Tocqueville, Mill, and the anti-monopoly tradition actually feared about concentrated power, and argue that those fears are more precisely realized by AI corporations than by the trusts and monopolies those thinkers actually confronted. The key move: what’s distinctive isn’t market concentration per se, but the epistemic dimension of the concentration—these companies shape not just what people can buy but what people can know and think.”

The purpose of academia is not to produce papers but to produce human beings capable of writing papers, understanding them, and putting them into practice. If generative AI comes to be used widely within academia, as seems likely, the institutions that adopt it will fairly quickly stop producing human beings capable of understanding academic papers, let alone writing them or putting their ideas into practice.
And in the same vein, I think a culture that uses generative AI in an unrestricted manner will quickly be out-competed by one that uses it sparingly. I hate to reduce things to Social Darwinism, but sometimes it's appropriate.
Such nonsense. Get back to me when an LLM can go to the archives, find rare manuscripts, read them, make some sense of them in light of previous work, and craft an original argument. Could say the same thing for fieldwork in the social sciences. All the LLMs know is what they’ve been trained on, and all of that knowledge is the product of hard won research and thinking by humans.