
'This is the last opportunity for us to wake up': A leading economist warns we're headed for an AI-driven cataclysm

Aki Ito   


How a leading economist learned to start worrying and fear artificial intelligence

Sure, there have been a few nutjobs out there who think AI will wipe out the human race. But ever since ChatGPT's explosive emergence last winter, the bigger concern for most of us has been whether these tools will soon write, code, analyze, brainstorm, compose, design, and illustrate us out of our jobs. To that, Silicon Valley and corporate America have been curiously united in their optimism. Yes, a few people might lose out, they say. But there's no need to panic. AI is going to make us more productive, and that will be great for society. Ultimately, technology always is.

As a reporter who's written about technology and the economy for years, I too subscribed to the prevailing optimism. After all, it was backed by a surprising consensus among economists, who normally can't agree on something as fundamental as what money is. For half a century, economists have worshiped technology as an unambiguous force for good. Normally, the "dismal science" argues, giving one person a bigger slice of the economic pie requires giving a smaller slice to the sucker next door. But technology, economists believed, was different. Invent the steam engine or the automobile or TikTok, and poof! Like magic, the pie gets bigger, allowing everyone to enjoy a bigger slice.

"Economists viewed technological change as this amazing thing," says Katya Klinova, the head of AI, labor, and the economy at the nonprofit Partnership on AI. "How much of it do we need? As much as possible. When? Yesterday. Where? Everywhere." To resist technology was to invite stagnation, poverty, darkness. Countless economic models, as well as all of modern history, seemed to prove a simple and irrefutable equation: technology = prosperity for everyone.

There's just one problem with that formulation: It's turning out to be wrong. And the economist who's doing the most to sound the alarm — the heretic who argues that the current trajectory of AI is far more likely to hurt us than to help us — is perhaps the world's leading expert on technology's effects on the economy.

Daron Acemoglu, an economist at MIT, is so prolific and respected that he's long been viewed as a leading candidate for the Nobel prize in economics. He used to believe in the conventional wisdom, that technology is always a force for economic good. But now, with his longtime collaborator Simon Johnson, Acemoglu has written a 546-page treatise that demolishes the Church of Technology, demonstrating how innovation often winds up being harmful to society. In their book "Power and Progress," Acemoglu and Johnson showcase a series of major inventions over the course of the past 1,000 years that, contrary to what we've been told, did nothing to improve, and sometimes even worsened, the lives of most people. And in the periods when big technological breakthroughs did lead to widespread good — the examples that today's AI optimists cite — it was only because ruling elites were forced to share the gains of innovation widely, rather than keeping the profits and power for themselves. It was the fight over technology, not just technology on its own, that wound up benefiting society.

"The broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress," Acemoglu and Johnson write. "We are beneficiaries of progress, mainly because our predecessors made the progress work for more people."

Today, in this moment of peak AI, which path are we on? The terrific one, where we all benefit from these new tools? Or the terrible one, where most of us lose out? Over the course of three conversations this summer, Acemoglu told me he's worried we're currently hurtling down a road that will end in catastrophe. All around him, he sees a torrent of warning signs — the kind that, in the past, wound up favoring the few over the many. Power concentrated in the hands of a handful of tech behemoths. Technologists, bosses, and researchers focused on replacing human workers instead of empowering them. An obsession with worker surveillance. Record-low unionization. Weakened democracies. What Acemoglu's research shows — what history tells us — is that tech-driven dystopias aren't some sci-fi rarity. They're actually far more common than anyone has realized.

"There's a fair likelihood that if we don't do a course correction, we're going to have a truly two-tier system," Acemoglu told me. "A small number of people are going to be on top — they're going to design and use those technologies — and a very large number of people will only have marginal jobs, or not very meaningful jobs." The result, he fears, is a future of lower wages for most of us.

Acemoglu shares these dire warnings not to urge workers to resist AI altogether, nor to resign us to counting down the years to our economic doom. He sees the possibility of a beneficial outcome for AI — "the technology we have in our hands has all the capabilities to bring lots of good" — but only if workers, policymakers, researchers, and maybe even a few high-minded tech moguls make it so. Given how rapidly ChatGPT has spread throughout the workplace — 81% of large companies in one survey said they're already using AI to replace repetitive work — Acemoglu is urging society to act quickly. And his first task is a steep one: deprogramming all of us from what he calls the "blind techno-optimism" espoused by the "modern oligarchy."

"This," he told me, "is the last opportunity for us to wake up."


Acemoglu, 56, lives with his wife and two sons in a quiet, affluent suburb of Boston. But he was born 5,000 miles away in Istanbul, in a country mired in chaos. When he was 3, the military seized control of the government, and his father, a left-leaning professor who feared the family's home would be raided, burned his books. The economy crumbled under the weight of triple-digit inflation, crushing debt, and high unemployment. When Acemoglu was 13, the military detained and tried hundreds of thousands of people, torturing and executing many. Watching the violence and poverty all around him, Acemoglu started to wonder about the relationship between dictatorships and economic growth — a question he wouldn't be able to study freely if he stayed in Turkey. At 19, he left to attend college in the UK. By the freakishly young age of 25, he had completed his doctorate in economics at the London School of Economics.

Moving to Boston to teach at MIT, Acemoglu was quick to make waves in his chosen field. To this day, his most-cited paper, written with Johnson and another longtime collaborator, James Robinson, tackles the question he pondered as a teenager: Do democratic countries develop better economies than dictatorships? It's a huge question — one that's hard to answer, because it could be that poverty leads to dictatorship, not the other way around. So Acemoglu and his coauthors employed a clever workaround. They looked at European colonies with high mortality rates, where history showed that power remained concentrated in the hands of the few settlers willing to brave death and disease, versus colonies with low mortality rates, where a larger influx of settlers pushed for property rights and political rights that checked the power of the state. The conclusion: Colonies that developed what they came to call "inclusive" institutions — ones that encouraged investment and enforced the rule of law — ended up richer than their authoritarian neighbors. In their ambitious and sprawling book, "Why Nations Fail," Acemoglu and Robinson rejected factors like culture, weather, and geography as explanations for why some countries are rich and others poor. The only factor that really mattered was democracy.
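For readers who want the mechanics, the workaround is what economists call an instrumental variable. In stylized form (the notation here is an illustrative sketch, not the paper's own), settler mortality is used to predict the quality of a colony's institutions, and it is that prediction, rather than the institutions themselves, that gets related to incomes today:

\[ \text{institutions}_i = \alpha + \beta \, \log(\text{settler mortality}_i) + \nu_i \]

\[ \log(\text{GDP per capita}_i) = \gamma + \delta \, \widehat{\text{institutions}}_i + \varepsilon_i \]

Because mortality rates from centuries ago plausibly affect incomes today only through the institutions they shaped, the estimate of \( \delta \) captures the effect of institutions on prosperity, rather than the reverse.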

The book was an unexpected bestseller, and economists hailed it as paradigm-shifting. But Acemoglu was also pursuing a different line of research that had long fascinated him: technological progress. Like almost all of his colleagues, he started off as an unabashed techno-optimist. In 2008, he published a textbook for graduate students that endorsed the technology-is-always-good orthodoxy. "I was following the canon of economic models, and in all of these models, technological change is the main mover of GDP per capita and wages," Acemoglu told me. "I did not question them."

But as he thought about it more, he started to wonder whether there might be more to the story. The first turning point came in a paper he worked on with the economist David Autor. In it was a striking chart that plotted the earnings of American men over five decades, adjusted for inflation. During the 1960s and early 1970s, everyone's wages rose in tandem, regardless of education. But then, around 1980, the wages of those with advanced degrees began to soar, while the wages of high-school graduates and dropouts plunged. Something was making the lives of less-educated Americans demonstrably worse. Was that something technology?

Acemoglu had a hunch that it was. With Pascual Restrepo, one of his students at the time, he started thinking of automation as something that does two opposite things simultaneously: It steals tasks from humans, while also creating new tasks for humans. How workers ultimately fare, he and Restrepo theorized, depends in large part on the balance of those two actions. When the newly created tasks offset the stolen tasks, workers do fine: They can shuffle into new jobs that often pay better than their old ones. But when the stolen tasks outpace the new ones, displaced workers have nowhere to go. In later empirical work, Acemoglu and Restrepo showed that this was exactly what had happened. Over the four decades following World War II, the two kinds of tasks balanced each other out. But over the next three decades, stolen tasks outpaced the new tasks by a wide margin. In short, automation went both ways. Sometimes it was good, and sometimes it was bad.
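In stylized terms (a bare-bones sketch of the idea, not the full machinery of the Acemoglu-Restrepo model), the fate of workers comes down to the sign of a simple difference:

\[ \Delta \, \text{labor demand} \;\approx\; \underbrace{\text{reinstatement}}_{\text{new tasks created for humans}} \;-\; \underbrace{\text{displacement}}_{\text{tasks taken over by machines}} \]

Acemoglu and Restrepo's own names for the two forces are the "displacement effect" and the "reinstatement effect"; when the difference turns negative for long enough, wages follow.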

It was the bad part that economists were still unconvinced about. So Acemoglu and Restrepo, casting around for more empirical evidence, zeroed in on robots. What they found was stunning: Every additional robot introduced since 1990 had reduced employment by approximately six humans, while measurably lowering wages. "That was an eye-opener," Acemoglu told me. "People thought it would not be possible to have such negative effects from robots."

Many economists, clinging to the technological orthodoxy, dismissed the effects of robots on human workers as a "transitory phenomenon." In the end, they insisted, technology would prove to be good for everyone. But Acemoglu found that viewpoint unsatisfying. Could you really call something that had been going on for three or four decades "transitory"? By his calculations, robots had thrown more than half a million Americans out of work. Perhaps, in the long run, the benefits of technology would eventually reach most people. But as the economist John Maynard Keynes once quipped, in the long run, we're all dead.


So Acemoglu set out to study the long run. First, he and Johnson scoured the course of Western history to see whether there were other times when technology failed to deliver on its promise. Was the recent era of automation, as many economists assumed, an anomaly?

It wasn't, Acemoglu and Johnson found. Take, for instance, medieval times, a period commonly dismissed as a technological wasteland. The Middle Ages actually saw a series of innovations that included heavy wheeled plows, mechanical clocks, spinning wheels, smarter crop-rotation techniques, the widespread adoption of wheelbarrows, and a greater use of horses. These advancements made farming much more productive. But we remember the period as the Dark Ages precisely because the gains never reached the peasants who were doing the actual work. Despite all the technological advances, they toiled for longer hours, grew increasingly malnourished, and most likely lived shorter lives. The surpluses created by the new technology went almost exclusively to the elites who sat at the top of society: the clergy, who used their newfound wealth to build soaring cathedrals and consolidate their power.

Or consider the Industrial Revolution, which techno-optimists gleefully point to as Exhibit A of the invariable benefit of innovation. The first, long stretch of the Industrial Revolution was actually disastrous for workers. Technology that mechanized spinning and weaving destroyed the livelihoods of skilled artisans, handing textile jobs to unskilled women and children who commanded lower wages and virtually no bargaining power. People crowding into the cities for factory jobs lived next to cesspools of human waste, breathed coal-polluted air, and were defenseless against epidemics like cholera and tuberculosis that wiped out their families. They were also forced to work longer hours while real incomes stagnated. "I have traversed the seat of war in the peninsula," Lord Byron lamented to the House of Lords in 1812. "I have been in some of the most oppressed provinces of Turkey; but never, under the most despotic of infidel governments, did I behold such squalid wretchedness as I have seen since my return, in the very heart of a Christian country."

If the average person didn't benefit, where did all the extra wealth generated by the new machines go? Once again, it was hoarded by the elites: the industrialists. "Normally, technology gets co-opted and controlled by a pretty small number of people who use it primarily to their own benefits," Johnson told me. "That is the big lesson from human history."

Acemoglu and Johnson recognized that technology hasn't always been bad: At times, they found, it's been nothing short of miraculous. In England, during the second phase of the Industrial Revolution, real wages soared by 123%. The average working day declined to nine hours, child labor was curbed, and life expectancy rose. In the United States, during the postwar boom from 1949 to 1973, real wages grew by almost 3% a year, creating a vibrant and stable middle class. "There has never been, as far as anyone knows, another epoch of such rapid and shared prosperity," Acemoglu and Johnson write, surveying the record all the way back to the ancient Greeks and Romans. It's episodes like these that made economists believe so fervently in the power of technology.

So what separates the good technological times from the bad? That's the central question that Acemoglu and Johnson tackle in "Power and Progress." Two factors, they say, determine the outcome of a new technology. The first is the nature of the technology itself — whether it creates enough new tasks for workers to offset the tasks it takes away. The first phase of the Industrial Revolution, they argue, was dominated by textile machines that replaced skilled spinners and weavers without creating enough new work for them to pursue, condemning them to unskilled gigs with lower wages and worse conditions. In the second phase of the Industrial Revolution, by contrast, steam-powered locomotives displaced stagecoach drivers — but they also created a host of new jobs for engineers, construction workers, ticket sellers, porters, and the managers who supervised them all. These were often highly skilled and highly paid jobs. And by lowering the cost of transportation, the steam engine also helped expand sectors like the metal-smelting industry and retail trade, creating jobs in those areas as well.

"What's special about AI is its speed," Acemoglu says. "It's much faster than past technologies. It's pervasive. It's going to be applied pretty much in every sector. And it's very flexible."

The second factor that determines the outcome of new technologies is the prevailing balance of power between workers and their employers. Without enough bargaining power, Acemoglu and Johnson argue, workers are unable to force their bosses to share the wealth that new technologies generate. And the degree of bargaining power is closely tied to democracy. Electoral reforms — kickstarted by the working-class Chartist movement in 1830s Britain — were central to transforming the Industrial Revolution from bad to good. As more men won the right to vote, Parliament became more responsive to the needs of the broader public, passing laws to improve sanitation, crack down on child labor, and legalize trade unions. The growth of organized labor, in turn, laid the groundwork for workers to extract higher wages and better working conditions from their employers in the wake of technological innovations, along with guarantees of retraining when new machines took over their old jobs.

In normal times, such insights might feel purely academic — just another debate over how to interpret the past. But there's one point that both Acemoglu and the tech elite he criticizes agree on: We're in the midst of another technological revolution today with AI. "What's special about AI is its speed," Acemoglu told me. "It's much faster than past technologies. It's pervasive. It's going to be applied pretty much in every sector. And it's very flexible. All of this means that what we're doing right now with AI may not be the right thing — and if it's not the right thing, if it's a damaging direction, it can spread very fast and become dominant. So I think those are big stakes."


Acemoglu acknowledges that his views remain far from the consensus in his profession. But there are indications that his thinking is starting to have a broader impact in the emerging battle over AI. In June, Gita Gopinath, who is second in command at the International Monetary Fund, gave a speech urging the world to regulate AI in a way that would benefit society, citing Acemoglu by name. Klinova, at the Partnership on AI, told me that people high up at the leading AI labs are reading and discussing his work. And Paul Romer, who won the Nobel prize in 2018 for work that showed just how critical innovation is for economic growth, says he's gone through his own change in thinking that mirrors Acemoglu's.

"It was wishful thinking by economists, including me, who wanted to believe that things would naturally turn out well," Romer told me. "What's become more and more clear to me is that that's just not a given. It's blindingly obvious, ex post facto, that there are many forms of technology that can do great harm, and also many forms that can be enormously beneficial. The trick is to have some entity that acts on behalf of society as a whole that says: Let's do the ones that are beneficial, let's not do the ones that are harmful."

Romer praises Acemoglu for challenging the conventional wisdom. "I really admire him, because it's easy to be afraid of getting too far outside the consensus," he says. "Daron is courageous for being willing to try new ideas and pursue them without trying to figure out, where's the crowd? There's too much herding around a narrow set of possible views, and we've really got to keep open to exploring other possibilities."

Early this year, a few weeks before the rest of us got to try it, a research initiative organized by Microsoft gave Acemoglu early access to GPT-4. As he played around with it, he was amazed by the responses he got from the bot. "Every time I had a conversation with GPT-4 I was so impressed that at the end I said, 'Thank you,'" he says, laughing. "It's certainly beyond what I would have thought would be feasible a year ago. I think it shows great potential in doing a bunch of things."

But the early experimentation with AI also introduced him to its shortcomings. He doesn't think we're anywhere close to the point where software will be able to do everything humans can — a state that computer scientists call artificial general intelligence. As a result, he and Johnson don't foresee a future of mass unemployment. People will still be working, but at lower wages. "What we're concerned about is that the skills of large numbers of workers will be much less valuable," he told me. "So their incomes will not keep up."

Acemoglu's interest in AI predates the explosion of ChatGPT by many years. That's in part thanks to his wife, Asu Ozdaglar, who heads the electrical engineering and computer science department at MIT. Through her, he received an early education in machine learning, which was making it possible for computers to complete a wider range of tasks. As he dug deeper into automation, he began to wonder about its effects not just on factory workers, but on office workers. "Robots are important, but how many blue-collar workers do we have left?" he told me. "If you have a technology that automates knowledge work, white-collar work, clerical work, that's going to be much more important for this next stage of automation."

In theory, it's possible that automation will end up being a net good for white-collar workers. But right now, Acemoglu is worried it will end up being a net bad, because society currently lacks the conditions necessary to ensure that new technologies benefit everyone. First, thanks to a decades-long assault on organized labor, only 10% of the working population is unionized — a record low. Without bargaining power, workers won't get a say in how AI tools are implemented on the job, or who shares in the wealth they create. And second, years of misinformation have weakened democratic institutions — a trend that's likely to get worse in the age of deepfakes.

Moreover, Acemoglu is worried that AI isn't creating enough new tasks to offset the ones it's taking away. In a recent study, he found that the companies that hired more AI specialists over the past decade went on to hire fewer people overall. That suggests that even before the ChatGPT era, employers were using AI to replace their human workers with software rather than to make them more productive, just as they had with earlier digital technologies. Companies, of course, are always eager to trim costs and goose short-term profits. But Acemoglu also blames the field of AI research for the emphasis on replacing workers. Computer scientists, he notes, judge their AI creations by seeing whether their programs can achieve "human parity" — completing certain tasks as well as people.

"It's become second nature to people in the industry and in the broader ecosystem to judge these new technologies in how well they do in being humanlike," he told me. "That creates a very natural pathway to automation and replicating what humans do — and often not enough in how they can be most useful for humans with very different skills" than computers.

Acemoglu argues that building tools that are useful to human workers, instead of tools that will replace them, would benefit not only workers but their employers as well. Why focus so much energy on doing something humans can already do reasonably well, when AI could instead help us do what we never could before? It's a message that Erik Brynjolfsson, another prominent economist studying technological change, has been pushing for a decade now. "It would have been lame if someone had set out to make a car with feet and legs that was humanlike," Brynjolfsson told me. "That would have been a pretty slow-moving car." Building AI with the goal of imitating humans similarly fails to realize the true potential of the technology.

"The future is going to be largely about knowledge work," Acemoglu says. "Generative AI could be one of the tools that make workers much more productive. That's a great promise. There's a high road here where you can actually increase productivity, make profits, as well as contribute to social good — if you find a way to use this technology as a tool that empowers workers."


In March, Acemoglu signed a controversial open letter calling on AI labs to pause the training of their systems for at least six months. He didn't think companies would adopt the moratorium, and he disagreed with the letter's emphasis on the existential risk that AI poses to humanity. But he joined the list of more than a thousand other signatories anyway — a group that included the AI scientist Yoshua Bengio, the historian Yuval Noah Harari, the former presidential candidate Andrew Yang, and, strangely, Elon Musk. "I thought it was remarkable in bringing together an amazing cross-section of very different people who were articulating concerns about the direction of tech," Acemoglu told me. "High-profile efforts to say, 'Look, there might be something wrong with the direction of change, and we should take a look and think about regulation' — that's important."

As for what to do, Acemoglu and Johnson devote an entire chapter at the end of their book to what they view as promising ways to ensure that AI leads to shared prosperity. Among them: Taxing wages less, and software more, so companies won't be incentivized to replace their workers with technology. Fostering new organizations that advocate for the needs of workers in the age of AI, the way Greenpeace pushes for climate activism. Repealing Section 230 of the Communications Decency Act, to force internet companies to stop promoting the kind of misinformation that hurts the democratic process. Creating federal subsidies for technology that complements workers instead of replacing them. And, most broadly, breaking up Big Tech to foster greater competition and innovation.


Economists — at least the ones who aren't die-hard conservatives — don't object in general to Acemoglu's proposals to increase the bargaining power of workers. But many struggle with the idea of trying to steer AI research and implementation in a direction that's beneficial for workers. Some question whether it's even possible to predict which technologies will create enough new tasks to offset the ones they replace. But in my private conversations with economists, I've also sensed an underlying discomfort at the prospect of messing with how technology unfolds in the marketplace. Since 1800, when the Industrial Revolution was first taking hold in the US, GDP per capita — the most common measure of living standards — has grown more than twentyfold. The invisible hand of technology, most economists continue to believe, will ultimately benefit everyone, if left to its own devices.
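(A quick back-of-the-envelope translation: compounded over the roughly 223 years from 1800 to 2023, a twentyfold rise works out to average per-capita growth of about \( 20^{1/223} - 1 \approx 0.0135 \), or 1.4% a year. Modest in any single year but transformative across two centuries, which is why the belief runs so deep.)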

I used to think that way, too. A decade ago, when I first began reporting on the likely effects of machine learning, the consensus was that careers like mine — ones that require a significant measure of creativity and social intelligence — were still safe. In recent months, even as it became clear how well ChatGPT can write, I kept reassuring myself with the conventional wisdom. AI is going to make us more productive, and that will be great for society. Now, after reviewing Acemoglu's research, I've been hearing a new mantra in my head: We're all fucked.

That's not the takeaway Acemoglu intended. In our conversations, he told me over and over that we're not powerless in the face of the dystopian future he foresees — that we have the ability to steer the way AI unfolds. Yes, that will require passing a laundry list of huge policies, in the face of a tech lobby with unlimited resources, through a dysfunctional Congress and a deeply pro-business Supreme Court, amid a public fed a digital firehose of increasingly brazen lies. And yes, there are days when he doesn't feel all that great about our chances either.

"I realize this is a very, very tall order," Acemoglu told me. But you know whose chances looked even grimmer? Workers in England during the mid-19th century, who endured almost 100 years of a tech-driven dystopia. At the time, few had the right to vote, let alone to unionize. The Chartists who demanded universal male suffrage were jailed. The Luddites who broke the textile machines that displaced them were exiled to Australia or hanged. And yet they recognized that they deserved more, and they fought for the kinds of rights that translated into higher wages and a better life for them and, two centuries later, for us. Had they not bothered, the march of technology would have turned out very differently.

"We have greatly benefited from technology, but there's nothing automatic about that," Acemoglu told me. "It could have gone in a very bad direction had it not been for institutional, regulatory, and technological adjustments. That's why this is a momentous period: because there are similar choices that need to be made today. The conclusion to be drawn is not that technology is workers' enemy. It's that we need to make sure we end up with directions of technology that are more conducive to wage growth and shared prosperity." That's why Acemoglu dedicated "Power and Progress" not only to his wife but to his two sons. History may point to how destructive AI is likely to be. But it doesn't have to repeat itself.

"Our book is an analysis," he told me. "But it also encourages people to be involved for a better future. I wrote it for the next generation, with the hope that it will get better."


Aki Ito is a senior correspondent at Insider.


