Yascha Mounk
The Good Fight
David Autor on the Scars That Money Can’t Heal

Yascha Mounk and David Autor explore why you can’t compensate workers for lost dignity with a check—and what that means for future economic shocks.

David Autor is the Daniel (1972) and Gail Rubinfeld Professor in the MIT Department of Economics, and codirector of the NBER Labor Studies Program and the James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work.

In this week’s conversation, Yascha Mounk and David Autor discuss whether the economic pessimism of the 2010s was justified, what lessons we failed to learn from the China trade shock, and how artificial intelligence will reshape the American job market.

This transcript has been condensed and lightly edited for clarity.


Yascha Mounk: I was thinking in preparation for this talk about how the broader shape of the economic conversation has changed over the course of the last 15 or 20 years. It strikes me that there was a period of deep pessimism in the 2010s. This came from Thomas Piketty’s work on rising inequality, and from Branko Milanovic’s famous “elephant curve,” which suggested that while the global very rich gained the most from globalization, the global middle class did not gain much at all. It also came from your work regarding the “China shock” and the declining middle of the American job market.

Ten or 15 years on—and all of this is before we even get to the subject of AI—it feels like there is a little bit more optimism. Piketty’s work has been critiqued quite widely. Milanovic has updated his chart, and it seems to be much more positive, showing much more broad-based gains from globalization. You also published an interesting paper a few years ago saying that, at least since the COVID-19 pandemic, less affluent American workers have actually done comparatively better than more affluent ones. Was the pessimism of the 2010s a mistake, or does some of it remain?

David Autor: I do not think it was a mistake, but it is good that there is positive news as well. Something we learned from the work I did with Gordon Hanson and David Dorn on the China trade shock was how scarring rapid labor market change can be when it involves the loss of critical sectors or career jobs.

What has been very positive since around 2015, at least in the United States, has been relatively robust wage growth in the bottom half of the wage distribution. This has been pretty pervasive; it started before the pandemic, but then it really took off. Before the pandemic, a lot of what was going on in the United States was the rise of minimum wage laws across many states; we did not actually see much wage compression outside of the states that raised their minimum wages. Starting with the pandemic, however, there was just across-the-board tightening. That has been really dramatic and positive. I think a lot of that has to do with running a tight labor market and having demographic tightening—real labor scarcity has contributed to that.

Simultaneously, I think many things remain concerning, even before we get to the present era. A lot of the job growth among workers without a college degree is not in stable career jobs; much of it is in hourly service jobs that are comparatively low-paid, not very economically secure, and lack high lifetime returns on specialized skills. Growing income concentration has been with us this entire time.

The China trade shock, in the form that David, Gordon, and I studied, was a very concentrated period that ran its course in its own way. However, the competition from China is now actually much more significant. It is not just about jobs making commodity products, but competition in the core leadership sectors of technology that have both civilian and military applications. This will affect the prosperity of the United States, Western countries, and democracies very broadly.

Mounk: Let us hit a couple of those points. First, we were talking about how scarring the China shock was. There are two elements here. One is that economists say there are gains from free trade and, of course, winners and losers. They argue this is fine because, since there is an aggregate gain, you can redistribute some of those gains from the winners to the losers. In practice, of course, redistributing those gains is very hard and tends not to happen.

The second point is that even if the losers might be made whole in some material sense—even if we found mechanisms where a worker with a union job in a car factory who is now unemployed is given enough money to avoid an income shortfall, which seems unlikely—there is probably a psychic scar. There is a sense of no longer being necessary that remains.

To what extent do you think those were genuine policy mistakes which explained the China shock and made it as bad as it was, with all the political consequences that followed, and to what extent was it inevitable? To what extent was it good and necessary to get China into the global economy? Would forms of automation perhaps have taken some of those jobs if the China shock had not happened? With the benefit of hindsight, were real mistakes made there, or were some of those disruptions inevitable?

Autor: First, let me address your point about the nature of the losses. You are completely right to distinguish between pecuniary losses—lower income—and broader psychic scars about identity, status, and purpose.

Both of those are first-order issues. Many people losing manufacturing jobs were in areas exposed to the China trade shock. Just so your listeners are aware, the China trade shock largely accompanied China’s accession to the WTO in 2001 and led to a huge contraction of U.S. manufacturing. About 4 million jobs were lost in the course of seven years. Not all of that was due to the China trade shock, but an important part of it was.

That is not a huge number of jobs on the scale of an economy of 155 or 160 million workers, but it was very geographically concentrated in certain industries and places. The people who lost work could not just go to other manufacturing work; it simply did not exist. Either they hung on, left the labor force, or joined typically much lower-paid service employment. While the pecuniary losses were significant, there is also the issue of identity. These workers usually did not have high levels of education or broadly portable skills. Recent work by my colleague Amy Finkelstein finds that manufacturing job loss during the China trade shock and NAFTA was associated with higher excess mortality among non-elderly men. That was not true for non-manufacturing job loss. These were high-wage jobs for non-college men with steady hours and long-term prospects. They were the anchor of a certain family structure, and nothing comparable was available. I do not mean to sound nostalgic; the data supports this.

Economists like to invoke the “second welfare theorem,” which says you first expand the pie and then divide it. It assumes you can just compensate people with money. I do not think that is true. You cannot give someone back their injured self-esteem by writing them a check. People do not love the notion of being compensated for losses; they would strongly prefer unions, minimum wages, and things that make their jobs pay well—not a consolation prize for what they lost.

Regarding your second question: was this inevitable, and was it necessary? Some of it was inevitable: U.S. manufacturing employment in labor-intensive sectors would have declined eventually. These were legacy activities that would have faded over a couple of decades. However, they would not have cratered in the time period they did. In labor markets, it is not just how much change you want, but how fast you get there. The market has a natural ebb and flow; you can accommodate a 10 percentage point change over 10 years, but if it happens overnight, it is much more challenging.

Even if we accept the idea that we had to do this, we could have done it at a more gradual rate. The merchandise trade deficit as a share of GDP was enormous at that time, which meant manufacturing had to decline very rapidly. The labor market was simply not geared to change that quickly. We bungled it by not slowing the process down or countering China’s currency manipulation. We could also have had better compensatory policies; Trade Adjustment Assistance was very limited and directed toward training. We now know that wage insurance policies are more effective, but the belief at the time was that there was nothing to worry about.




Finally, was it necessary? That is a much harder political calculation. It was difficult to foresee how things would turn out. It was reasonable to say that China was a rising power and the world’s most populous country, and we should help it open up. The fact that it became more autocratic and less tolerant of democratic norms was not something we could have easily forecast. If we had prevented China from joining the WTO, we might now be saying, if only we had let them in, they would not have turned into an adversary. While it is hard to blame people for not knowing the future, I believe that—conditional on the decision—we bungled the policy badly.

Mounk: Excellent answer to all four of my explicit and implicit questions. I am going to try and be more disciplined in the questions I ask you going forward. To pick up on one of the strands of our conversation: do you think that we have learned from the mistakes of the China shock in such a way that if we suddenly faced, completely hypothetically, a bunch of jobs disappearing because of a technological shock, we would be much smarter at thinking about how to minimize the social and downstream political costs?

Or do you think that it is just incredibly hard to do that, even with the lessons we might take from that shock? This might be in part because of questions of rational choice, where the people who have just lost their jobs are not necessarily the people who can best advocate for their own interests. It may also be because it requires significant redistribution, which is always hard to do, and because of all kinds of other structural obstacles that stand in the way of implementing even the smart lessons we may have drawn from those bad decisions.

Autor: I can give you a simple answer to the question: have we learned anything? The answer is no. Our political system has learned nothing. In fact, to the degree it has learned anything, it has learned the wrong lessons. Again, I speak of the United States in particular. The United States has eliminated the Trade Adjustment Assistance Program—the only thing we ever had in place to help people displaced by trade. It is no longer funded by Congress. That is a mind-blowing fact.

The only lessons we seem to have taken about trade policy are that we should harm our neighbors and our adversaries simultaneously, across the board, and protect sectors that do not need protecting while failing to invest strategically in those that do. It is crazy to think we should be making tube socks or assembling iPhones in the United States; there is never going to be high-paid work in that, and we are just taxing ourselves by doing it.

On the other hand, we should be thinking very strategically about sectors like semiconductors, fusion, artificial intelligence, drones, electric vehicles, power generation, and robotics. Yet the United States is not doing that. Moreover, we are not big enough to contend with China on all those fronts alone. We used to have allies in Canada and in Europe. Collectively, we were an incredibly large economic strategic bloc, and yet we have fractured those relations.

Like so many things in the Trump era, it is exactly the right question followed by a terrible answer. I do not want to say no one has learned anything; I think both our political process and the economics profession have done some rethinking and concluded that we should not be so laissez-faire on trade. I agree that is the “hit me over the head with a sledgehammer” lesson. But in terms of what to do about it, I think our responses have been counterproductive, unwise, short-sighted, and harmful in the long run—both to the United States and to our allies.

Mounk: Obviously, a lot of the economic discussion now is about the effect that artificial intelligence is going to have, and you have written very interestingly about that. I want to get to that in a moment. But before we do, how would you characterize the pre-AI state of the American economy, and the American job market more specifically? I do not know whether you want to address that by looking at what it looked like the day ChatGPT was released, or what it looks like today.

Autor: I would say that from 2015 through the spring of 2020, the U.S. labor market was actually in great shape. Wage growth was robust, and productivity growth was pretty strong. There was a significant reduction in inequality among “normal mortal” earners—those between the 10th and 90th percentile. At the very top, things were still growing explosively, which one can have mixed feelings about, but in terms of the people I am most deeply concerned about—those without college degrees earning at or below the median—things were looking very good.

The U.S. labor market was in great condition. If you want to put it this way, the Trump labor market was a great labor market, especially for blue-collar workers. This provided a substantial basis of support for Trump. If you ask why so many Hispanic and Black voters supported him in 2020 and even more so in 2024, it is partly because things were going very well for American workers outside the elite.




I do not think most Americans appreciate that only two countries have had economic miracles over the last 20 years: the United States and China. The notion of “American carnage” is misguided; productivity growth has been robust, unemployment has been low, and wage growth has been relatively strong. I do not mean to imply it was a utopia—there are many things to be unhappy about—but it was overall going very well.

Then, of course, we had the COVID-19 pandemic, which was a huge setback. However, the U.S. economy recovered more rapidly than almost any other country. We did have significant inflation, which was partly due to COVID and partly because the Biden administration overstimulated the economy. But that was coming down even before 2024, and we managed to land it without a recession, which is remarkable. One could call this the “immaculate recovery”—to recover from that type of inflation without a recession is almost without precedent.

We were already coming back into pretty good shape in 2024 and 2025 when Trump took office. Since then, we have been awash in incredible uncertainty regarding tariff policies and shifting energy sectors—whether oil, gas, solar, or coal. We are facing multiple wars of choice and the political persecution of our leading technology companies, universities, and elected officials.

I am not an expert in this specific area, but Nick Bloom has done a lot of work on the cost of uncertainty itself. How do you invest when you do not know what prices or tariffs will be, or whom you will be at war with? The U.S. economy and labor market have been remarkably robust in the face of all that, but job growth has slowed enormously. In fact, the jobs report from last month was quite negative. While those numbers fluctuate, the irony is that the United States did not recognize how good it had it. In responding to that misperception, it implemented policies that arguably made things much worse.

Mounk: It is striking that there was that gap between the data and the perception. One thing I am really struck by is that China, and to a lesser degree India and other populous nations, have had phenomenal economic growth over the last few decades. While that is obviously easier to do when you are coming from behind, it is a phenomenal success story.

Autor: China’s growth over the last four decades is miraculous, and not just for China. It is the creation of a global middle class for the first time. China’s growth has created prosperity, not just in China by bringing hundreds of millions of people out of poverty, but also in Central and South America and Sub-Saharan Africa. It has been the best event in world economic history for poverty reduction worldwide. It has created challenges, and I am not saying it is all upside, but the world has never seen anything like it.

Mounk: One of the impacts of that has been a significant reduction in global inequality. If you look at the global Gini coefficient, it has come down significantly because hundreds of millions of people have been lifted into the global middle class.

What is striking is that while the share of global GDP taken up by China—and to a lesser extent by India and other countries—has risen significantly, and Europe’s share has declined rapidly, the United States has actually held on. It has seen some decline in its share of global GDP, but given the background of that incredible economic miracle in China and significant progress in nations like India, it is astonishing how little America’s share has declined.

Yet people still feel very unsettled, negative, and pessimistic. What explains that disjunction? Is it just that once a nation becomes very affluent, it becomes much harder to generate growth that translates into a tangible improvement in life prospects? Is this a “disease of affluence” we will have forever? Does it have to do with political changes or a shifting technological environment? What explains that gap?

Autor: Okay, so I need to be clear: I am now just going to be giving you an uninformed opinion in the sense that I am an economist, not a sociologist or a philosopher. Why are people’s perceptions so negative given that there are so many outward indications that things are so positive?

I would say two things. One is, unfortunately, Americans are very ill-informed about where we stand relative to others. If they are told that America used to be great and is no longer great, that we are being taken advantage of by everyone else, that we are the laughing stock of the world, and that we have given up all our power, they begin to believe that things are very bad.

The other thing, which I think has more foundation, is that there is a ton of economic insecurity in the United States. I do not think this was true in the first three post-war decades—a period of really rapid economic growth combined with very robust, even growth for people at all education levels. That era saw rising living standards for everyone.

Today, the average in the United States is not very informative because very few people are near the average; the outcomes are so dispersed. I think many people—the majority of Americans, who do not have four-year college degrees—feel that their horizons are shorter than they were 40 years ago. They do not see a secure pathway into the middle class, and it is not obvious that their children will do better than they did.

That is real. You could say the United States created a lot of wealth on average but did not distribute it very evenly. Some would argue that is how it got so wealthy—by not “squandering” it on redistribution or the welfare state. I do not personally believe that, though I could not refute it immediately. I think the United States could have used its prosperity differently to create more economic security, which would have resulted in less dissatisfaction.

In terms of the China trade shock, it certainly was not negative for everyone; it lowered many prices and may have been positive in the aggregate for the United States. However, the distributional consequences were so adverse that they made it very scarring. It would have been possible to use some of those resources more effectively; even if you cannot make people entirely whole, you certainly could have made them better off than they were without that assistance.

Mounk: I feel like we’ve been working up to the topic, and now I want to take the bull by the horns. What kind of technology is artificial intelligence? When we think about artificial intelligence in the context of other historic technologies that have had a major economic impact, what do you think is the right comparator? What level of change and disruption are we talking about?

Autor: Well, let me try to describe what makes artificial intelligence distinct from other technologies. Prior to the computer, we were in eras of mechanization where we had better tools that could do amazing things like facilitate chemical reactions or pull plows. The computer was the first commercial symbolic processing tool—something that could take information, stored symbols, and instructions, and then act upon, process, and analyze that information. We never really had any tools for that other than our own minds, pens, and paper.

That gave us enormous power to take repetitive, complicated, and difficult tasks—everything from calculating space trajectories to playing chess—write them down as code, and have increasingly inexpensive machines do them with perfect accuracy and incredible speed. That was an amazing breakthrough.

From the 1980s to 2020, we were in this era of computerized automation. That was very displacing for people whose work was often relatively expert but followed well-understood rules and procedures. It was very complementary to professionals, who could take the time-consuming parts of their work—retrieving information, looking things up, or calculating—and make those tasks incredibly efficient so they could focus on what they were really good at: making expert decisions. It was not especially applicable to blue-collar work.

However, for everything we computerized, we had to understand the rules of how to do it. We had to understand it formally so we could write down the procedure for a non-sentient, non-improvisational machine to follow. That turns out to be a huge limitation because, as the philosopher Michael Polanyi said, “We know more than we can tell.” There are many things we do without knowing exactly how we do them.

We don’t know the “code” to write a funny joke, make a persuasive argument, develop a hypothesis, or ride a bicycle. These are all things people do based on tacit understanding learned inductively. We never write down the rules.

AI overcomes that “Polanyi paradox” because it can learn things without us telling it. It can know more than it can tell us because it learns inductively from examples and data. It can learn from unstructured information and solve problems we don’t know how to solve. It can perform tasks we would think of as creative if done by another person. This moves technology into a whole new domain of cognitive and physical activity that machines simply couldn’t enter before.

It is qualitatively different from traditional computing. To use an analogy: if traditional computing is like an orchestral musician reading notes exactly in sync, AI is much more like a jazz musician who can solo, improvise, and extrapolate. That is incredibly powerful, and we are only beginning to figure out how to use it well.

One thing AI is not, however, is simply a better, cheaper, or faster version of something else. In many ways, AI is not as good at many tasks as traditional computing. You probably saw the story in the New York Times about not using AI to do your taxes; it’s not reliable. No one ever said, “I wouldn’t use Excel for that problem; it might hallucinate.” That was never a concern with traditional software.

Mounk: That is really fascinating. It makes me wonder about the terms in which we should think about the economic impact artificial intelligence is going to have. For good reasons, social scientists like to look to the past for guidance about what might be around the next historical corner, while being very careful about making projections. When a new technology arises, the natural thing to do is to look at what happened when other major technologies emerged. That may be an imperfect way of reasoning through the problem, but it is the best we have.

In the past, we have typically seen a phenomenon that economists call “job reinstatement.” This is where a machine or some form of automation takes away a bunch of jobs that can now be done more efficiently, but new roles are created in their wake. As we discussed, this can be deeply disruptive to the people directly affected; they may be too old or too set in a particular range of skills—or a way of life, like that of farmers or peasants—to really accommodate these new jobs. This can have deeply disruptive consequences. Yet, the children and the children’s children simply adjust to this new world and learn the skills necessary for new tasks, many of which might not have existed beforehand. Therefore, over time, we have not seen the phenomenon of mass-scale job loss.

One of the background conditions for that was what I call a “mental reservoir”—a reservoir of tasks that machines could not perform. First, this was because there was no way of automating cognitive tasks. Later, even when we could automate some cognitive tasks, we could only automate those that followed the steps of an algorithm in the way a computer can.

Now, for the first time, we have a machine that potentially rivals us in carrying out all of those cognitive tasks—as it does to some extent today, and likely will to a much larger extent in ten, five, or even one year. Is that going to make this metaphor of job reinstatement obsolete? Or do you think that is a timeless mechanism which will persist even as this new machine penetrates deeply into an area of human skill that, until now, only members of our species were able to carry out?

Autor: First, let me say that although we have been through multiple technological eras, not all the transitions are smooth or painless. The first Industrial Revolution was an era where valuable artisanal skills were wiped out. People who did textiles and weaving saw their expertise become valueless; they were replaced by indentured children and unmarried women working in what William Blake called the “dark satanic mills.” It took decades before new work began to appear that actually utilized the numeracy and literacy of the next generation of workers. There is no guarantee that it ever goes smoothly. Usually, even in the best case, there are winners and losers, much like the China trade shock where some expertise is devalued while new expertise is reinstated.

We have seen a lot of this over the last four decades. Computerization has been decidedly non-neutral for the welfare of different skill groups. It has been great for educated adults, but, on average, it has not been good for people without college degrees. It hollowed out the middle of higher-paying jobs in production, clerical work, and administrative support, and moved many people into services: food service, cleaning, security, transportation, and home healthcare. Although those services are socially valuable, that work is relatively inexpert. Most people can do it without much training or certification, meaning it won’t pay well because there is no scarcity of labor.

AI changes the game again, and there are two things we want to think about simultaneously. One is how this changes human comparative advantage. What will the machine make too cheap to meter? You wouldn’t want to do those tasks anymore because you can’t compete with a machine that does them instantly at almost zero cost. The second question is: what will it make more valuable because humans are really needed to do those remaining tasks?

We don’t have a very good answer to that yet. For now, we can say there is plenty of physical, hands-on work that will require labor for decades, even if we make tons of progress in robotics. I am very confident that there will be many people working as doctors in medicine, in education, and in the trades. In many settings, humans will continue to provide an intermediary layer between formal bodies of expertise and other people. We will look to other people to guide us in decision-making in every high-stakes domain.

That may mean there will be fewer people who are more expert, or in other cases, many more people who are less expert. Technology can be very bifurcating. Take ride-hailing services like Uber and Lyft. That really changed the occupation of taxi and chauffeur drivers, not just because it allowed more people to do it, but because being a taxi driver had two components: driving the car and knowing the routes. Ride-hailing meant you no longer needed to know the routes; you could drive in a city you had never visited immediately. That reduced the expertise requirements, simplified the work, and allowed many more people to do it. It simultaneously created new opportunities and unwelcome competition for incumbents.

In other cases, technology eliminates the simple part of the work and leaves the expert components remaining. Many people would say that in their professions, the part they actually have to do is much harder because the grunt work is done for you; you have to focus on diagnostic skills, software architecture, or solving hard problems in contracting.

We have to ask: where will we re-specialize? Which expert tasks will be de-skilled, and where will human expertise become more valuable because it is the central piece remaining to put everything together? That is a very hard question. It is not sufficient to say something is “exposed” to AI, with the implication being that it is at risk of shriveling up and dying. Exposure could mean it gets simpler and more people do it at lower pay, or it could mean it becomes more specialized because the technology does the easy part and the people who remain perform more abstract, increasingly demanding tasks.

We have seen this happen before. Accounting has gone from bookkeeping to planning and forecasting. In the professions, computers have taken away a lot of the simple work and made us focus on the high end. That is very hard to forecast.

There is a second component to this: will there simply be less work for people to do? I don’t want to be a Pollyanna and say that can’t happen. Wassily Leontief famously said that humans are like horses; they will soon be put out to pasture. Why aren’t horses being used anymore? They are just as productive as they ever were, but they cannot compete with an internal combustion engine. They are fundamentally too expensive; you have to maintain them, have a stable, and have land for them to graze on. There are cases where a factor of production simply cannot be competitive. Similarly, there is no wage at which a human could compete with a computer to do a set of calculations.

The calories a human would have to burn just to do those calculations would cost more, in food alone, than the electricity a microprocessor consumes. It is possible to have circumstances where the pure cost of hiring a person would not be justified when you could get a machine to do it. It is not impossible.

One indication that this could be occurring is that the labor share of national income is falling. It has been declining for a couple of decades, dropping from above 60% by about eight percentage points in the United States. While the majority of every dollar used to go to workers and the minority to capital, it is now getting closer to parity.

That is a possibility. I do not think we are going to enter a world where labor has no value or where we have no more labor scarcity. I think that is a long way away and unlikely. On the other hand, if labor’s share of national income falls by even a substantial number—say, 20 more percentage points—that is going to be pretty seismic.

Mounk: I have a few follow-ups on this. The first is about the example of horses, which is really interesting. The primary reason, you are saying, is that keeping and maintaining a horse is just very expensive, which is why the number of horses has gone down radically.

I think there may also be a second reason. I recently visited an Amazon warehouse, which is a fascinating thing to do. One of the things that becomes clear is that there are these very efficient robots that move products around, stow them, and bring them back out to the picker when they are needed. Sometimes these machines malfunction in some small way because a piece of lint gets attached to their wheels, and a human needs to come and clean that up. That is very costly because, to avoid accidents, the worker wears a special vest that makes all the robots nearby stop until the human is out of the area and they can resume their operation.

There is a significant cost to that human-robot interaction. An additional reason why it may be bad to have horses in the street is that there are cars around, and horses and cars do not mix very well. The handover cost from machine to human and back may be so high that even in tasks where humans retain a small comparative advantage, it may be easier to push them out of the loop entirely.

The other point I wanted to make is about judgment. You are assuming that in areas like medicine, for example, human judgment will always be required. On many benchmarks, AI bots now come very close to the judgment of humans on those decisions. They do not yet have sufficiently large context windows to take into account all of a patient's medical history, so we probably still want high-stakes decisions to be made by doctors. But quite plausibly, machines are going to have even more powerful models within a few years, so their acuity and judgment will be even higher. We are going to keep solving those kinds of external constraints by giving them larger context windows and finding smarter ways for them to understand more about the patient. We also know that humans already tend to trust AI bots quite highly. Why are we so sure that the most high-stakes medical decisions are not eventually going to be made by machines?

The real question I want to ask you, based on your academic expertise, is this: it seems to me there is a world of Silicon Valley technologists who do not really understand the constraints of the real world and who think all the jobs are going to disappear tomorrow. That is deeply naive. On the other hand, I think there is a strain in parts of economics where people are looking at studies of productivity gains from using ChatGPT-3.5 in highly constrained circumstances two years ago and projecting forward from that. I think that tends to give you estimates that are far too low.

If there is a significant decline in the demand for high-skill labor over the next 50 or 100 years, what happens then? If the world we are entering is one with some jobs that are highly specialized and new, and a good number of jobs that are not very specialized but for which we still need people, that is going to be a pretty bad outcome for most people. There is a little bit of the lawyer who no longer has to do the grunt work and just makes the decisions, and a little bit of the ride-hail worker who actually has been de-skilled.

If there is a small caste of people who are highly skilled and make these high-stakes decisions, and then there is a big pool of not-very-differentiated labor that can do these relatively low-skilled jobs, that could be enough to radically decrease the wages of a lot of people. It could lead to pretty deep economic disruptions. You are the economist who can model this, and I am just a political scientist who is speculating.

Autor: You certainly raise a fair point: there may be many domains where we currently think we need humans involved, but ultimately we will not. Medical diagnosis might be one of them, where the costs of error are so high and you have machines that could be trained on millions of cases. Maybe they will just be better. I think in many cases they will work collaboratively. I do not think it will be full automation for quite a while.

However, let me take your question on its premise. It is quite possible there are areas where we have labor now and we just will not want it. For a while, ATMs were complementary to bank tellers; bank teller employment actually grew. Banks began branching because they could do it at a lower cost. They built more branches and hired tellers not just as cashiers, but as salespeople to introduce you to other products like loans and investments. Now, teller employment is declining. People have decided they can just do it all online and do not need a person. Most of the people who still stand in line to see bank tellers are elderly individuals who are used to that way of doing things.

There may be a period where there is collaboration, followed eventually by encroachment. The other side of this is that if all this is happening, we are getting very wealthy all of a sudden. That means we are doing a lot of stuff very cheaply; there is a productivity side implied by this.

Why did this happen to horses so fast, while it has not happened to people so far? There are a few different reasons. One is that people are much more flexible than horses. We have a capacity to educate ourselves and do many different things. As my co-author Anna Salomons likes to say, people are not “one-trick ponies.” Two, of course, people own capital. We own the machines, whereas horses do not, so we get the fruits of those productivity benefits. The third reason is that we vote. Democracy buffers the effect and shapes how these things bear out.

When you say we could all end up jobless yet with high productivity, I am reminded of what the science fiction writer Ted Chiang said. He noted that fears of AI and automation are not really fears of the technology; they are fears of capitalism. We are afraid of what the economic system will do once we have those capabilities. If people do not need to hire workers, then the fruits of that productivity will not be distributed evenly.

That is a very significant concern. The biggest concern I have about the technology, at least from the labor market perspective, is the threat it implies for democratic function. In my view, the labor market is one of our most important social institutions. It works hand-in-glove with democratic institutions because most people are considered both contributors—through their labor and taxes—and claimants, through their retirement, education, and social insurance.

If labor were suddenly devalued, most people would be claimants without being seen as contributors. The political economy of that is a nightmare. To say we will just have a few rich people, tax them, and redistribute the rest has not historically tended to work out.

I do not see this as the most likely scenario, but it is sufficiently plausible that I would not dismiss it. People are very polarized on this. There are many “doomers” and many utopians. I think we should recognize that a range of possibilities is plausible—some quite good, some quite bad. Many of them will co-occur; we will make some terrible mistakes even as we see big gains. We should be adopting policies that help us ensure a better transition if such a transition is needed.

In the rest of this conversation, Yascha and David discuss what a world without employment would really look like and how we can harness technology to complement jobs for humans rather than destroy them. This part of the conversation is reserved for paying subscribers…
