
Raghuram Rajan considers whether the future with AI is one of widespread job losses - and how governments are likely to respond


The small equity-research shop Citrini recently sent a panic through financial markets when it outlined a scenario in which AI ends most white-collar employment by 2028, with dire consequences for the broader economy. But this forecast is surely too pessimistic in some respects. Outside a few sectors, like software, frictions to adoption and sheer inertia will probably slow the pace of change. This has always been the case. For example, although automated telephone exchanges were possible in the 1920s, the last human telephone operator in the United States was not replaced until the 1980s.

Moreover, the technology itself is always only one variable. There must also be processes and structures around it to assure customers of reliable service. This is where incumbents have an advantage over challengers, even if they do not use the latest technology.

And even if incumbents are displaced, the new opportunities created by AI-induced cost reductions and productivity enhancements need not lead only to more AI. They may also require the work of humans – as with the internet and the rise of influencers.

Still, in some ways, the Citrini post is not pessimistic enough. Even setting aside the possibility that we might all become slaves to some AI overlord, the broader economic outcomes depend on how good AI gets and how fast; the pace of adoption by users; who profits from it; and how society reacts. Given all these variables, some extreme scenarios are indeed conceivable.

Consider, for example, a future in which a few differentiated platforms (say, Anthropic or Meta) reach a level of generalized AI that allows them to outpace the competition and steadily charge high prices to user firms. These dominant platforms would generate enormous profits, augmenting the incomes of their employees (who will be few, because AI will cull their ranks) and their shareholders. At the same time, the many firms relying on their services would be willing to pay, because AI would raise their own productivity, allowing them to shed more white-collar workers.

These unemployed workers would then look for work in adjacent industries where AI has not yet rendered their skills useless. But if those jobs are few, they will join lines for work as gardeners, waiters, and shop assistants, further depressing wages for these occupations. Assuming that AI displaces cognitive tasks before skilled physical ones, machinists, plumbers, and masons may still have work until robots become sophisticated enough. But over time, competition for those jobs will also increase as white-collar workers retrain. The pain will spread, and only the AI platforms and their investors will benefit. Or will they?

Before answering that, consider another “competitive” scenario in which no platforms “win” because there is little differentiation between ChatGPT 33.2, Gemini 25, and all the others. Although this scenario may still be devastating for white-collar jobs, prices for AI will be low, and the productivity benefits will flow through the economy, as will the resulting profits. Spared from expending enormous sums on AI, user firms could cut prices and expand production to meet the increased demand, implying more jobs elsewhere. There would be far less pain than in the first scenario, because lower-priced goods and services would allow pre-existing worker savings to go further.

Not only do current trends suggest that this second scenario is more likely than an AI oligopoly, but the government could take steps to ensure that it materializes, for example, through AI price regulations or a refusal to protect AI model builders from those who copy them. Would-be AI oligarchs should not assume that society will defend their enormous profits even as their products cause widespread job losses and hardship.

Of course, AI incumbents will lobby aggressively, corrupting some legislators to block regulation. They will mount public campaigns, using their many channels of influence to argue (not entirely incorrectly) that regulation will be ham-handed, harming efficiency and innovation while benefiting geopolitical rivals. But if the AI-induced pain is indeed widespread, the political impetus for intervention will remain strong.

Even if the state fails to ensure competitive AI prices, it can tax oligopolistic AI providers, their employees, and their shareholders to compensate the affected. The difficulty here lies in targeting. How do you identify those with supernormal profits from AI? How do you support those harmed, given how hard it has been to assist trade-affected workers in the past? And how do you distinguish between a technologically displaced worker and a worker laid off because of adverse business conditions or incompetence?

To avoid some of these questions, there will probably be a push for generous unemployment support regardless of the cause – a first step toward an eventual universal basic income. But this raises another problem, because even if fiscally strapped governments can raise sufficient revenues, there will still be many jobs that require human workers. Overly generous unemployment benefits therefore will push up the wages employers have to offer to coax workers out of unemployment, further reducing job creation.

Ultimately, there are no easy public responses to the problem of large-scale but not universal unemployment. Societies will have to experiment creatively, improving the safety net somewhat, while encouraging businesses to create jobs and reskill workers where possible. At the same time, if any of the AI platforms racing to achieve a near-monopoly does reach its goal, government policy reactions will almost certainly impair its profits. How, then, will these companies’ massive and still-growing debts be serviced? Will a financial crisis follow?

The best we can hope for is a Goldilocks scenario where the AI rollout is not so fast that workers cannot learn how to augment their jobs with AI, rather than being displaced; and where the AI industry is not too oligopolistic, so that the benefits accrue to society more broadly. Imaginative commentaries like the Citrini post force us to think about what might happen if the AI story turns out differently. Now is the time to map out the possible scenarios and start preparing for them.


Raghuram G. Rajan, a former governor of the Reserve Bank of India and chief economist of the International Monetary Fund, is Professor of Finance at the University of Chicago Booth School of Business and the co-author (with Rohit Lamba) of Breaking the Mold: India’s Untraveled Path to Prosperity (Princeton University Press, May 2024). Akhil Rajan also contributed to this commentary. Copyright: Project Syndicate, 2026, published here with permission.


18 Comments

The real work still needs to be done... for how many will be the ultimate question, should AI progress as anticipated.


All the major AI initiatives I see have significant reductions in FTE as the realized benefit.

Group CEO talking 35% reduction, and it's a very big company.

These are not entry level jobs, and the same jobs will go at competitors as well.

Local CEO / CTO have FTE reduction KPI targets

Major centers like Auckland and Wellington will suffer these losses the most.

Likely to start seeing tax revenue impacts in 2027. Without growing taxes we will need to start cutting services.

 


As quantum appears to be the topic this morning, it raises the question of how much work needs to be done, for how many, and, crucially, how the limited resources are allocated.

AI may have an unpalatable answer to those questions.


It's interesting that human survival is really determined by the lower levels of Maslow's hierarchy of needs.

The lower levels keep you alive, but the higher levels keep you wanting to be alive. AI will probably ensure that the lower levels are available for every human, but at what cost to our sense of purpose, as both individuals and a species?

We have built incredibly complex institutions on top of this framework; will they lose funding and technically no longer be required? Universities can charge large fees based on your potential income after graduation. In the new world, AI will be able to tutor you in any topic for free, but you will be unlikely to be able to charge anyone for your learning. Will unis become research institutions only?

Let's look 25 years into the future.

  • What does democracy achieve in a world where human intelligence is neither required nor takes part in commerce?
  • Would AI listen to democratic leaders' decisions? Why would it need to? Can it be made to?
  • Unions are no longer required, unless it's perhaps a union for humans...
  • Even governments will no longer require many human bureaucrats.
  • Will we accept being judged by AI in a court of law, even if it's a human judge being assisted by AI reasoning?
  • How will we divide scarce resources if every person is paid the same universal benefit? Eat your gruel and be happy.
  • Once we become dependent on AI running the world, what happens when the power goes out in a massive earthquake, or in a war where undersea cables are cut on purpose?
  • What does war look like in this world? AI models seem very willing to use nukes in wargaming situations.
  • How is current debt repaid in a world where humans can no longer effectively sell intelligence?

 

I have seen what AI can achieve, and it's transformational compared to a world where human intelligence is expensive and limited; we will be able to accelerate technology at an amazing speed. I am not against AI, I just ask what the wider consequences of this are for society.

When do we start to have this conversation as a society?

 


My questions are more basic... how many humans are needed to ensure the progression of AI, when AI is increasingly framing our questions?


AI is now writing the next release of its own codebase; not many humans are now required.


".. not many humans are now required."

But a system is needed to maintain those deemed required... so the number is not zero, and probably never can be. I'd suggest it certainly isn't 8 billion.


When do we start to have this conversation as a society?

When do humans pause rather than push whatever envelope is available to them?

Usually we do the thinking after the event. Oh, and we're usually pretty terrible at predicting the future, too.


When do you think we should have paused? Before the computer? Before the washing machine? Before electricity? Before fire? PDK said the latter. Actually, where is he today? He's normally all over this: "it's not AI taking the jobs, it's entropy".


An interesting counterfactual would be: what if we had avoided WW2? Nuclear technology would have been much slower to arrive, and the nuclear arms race led to the start of the Internet, which then triggered the market for phones and social media. The Cold War also led to the space race and ICBM technology. No WW2 and we might still be in the heavy industrial era. Pollution might be worse, but maybe no annoying phones at dinner time!


I suspect AI and robotics will take almost every current job eventually - perhaps 100 years before you can order a house from a robot and a robot builder builds it from materials produced and transported by robots.
Hard to know if we’ll have new types of jobs by then or all be living off the state. Maybe the robots all pay tax. 
Of course everything should be much cheaper by then as no labour cost, just energy cost. And for energy the robots can build a new nuclear plant as needed. 
At some stage they’ll work out we’re not needed…

I suspect AI is just one (quite large) stepping stone along this journey, much like computers and washing machines and cars and even fire. 
None of this is anything new; people were predicting all this a long time back, e.g. The Jetsons. It's going a lot slower than they thought back then.


Artificial intelligence doesn't exist yet. We may never see it; however, we will recognise it if we do.

Also, a thing people need to keep in mind when looking at videos of clankers, like we see in China and even the USA, is that they are highly choreographed. Hours and hours of testing and fiddling go into those displays. It's not a display of what it can do, it's just an advertisement for the company or the country.

If, in the future, you buy a clanker, it will be recording everything you do and sending it back to head office. It may even be remote controlled by someone in India. Do you really want that in your home?


Artificial intelligence doesn't exist yet. 

Yeah it does. It's you and me. The intelligence in the universe is beyond our comprehension. Pick up a blade of grass, there's an energy and genetic framework there that over time can take that blade of grass and end up with an animal like us. 

So we are trying to make a digital replica of an analogue brain that's trying to make sense of the world using our limited sensory capabilities.

But, I also get your gist. We will make a digital brain that can simulate ours, but without the sensory and genetic underpinnings, it'll struggle to properly replicate what our minds can do.


Doesn't "artificial" mean "made by humans", an artifact? In a sense, I guess it is "you and me" in that it reflects human creativity. But I think some people misinterpret A.I. or future A.I. as human-like autonomous intelligence, capable of creativity. Something it is not.

Humans tend to anthropomorphize things. You write about how a digital brain would "struggle", but it wouldn't struggle in any sense like an analog human system would. It would be more like an electric drill failing to drive a screw in: no emotions, no feelings, operating below the diverse complexity of even a single human cell. The digital can never reach the complexity of analog biological systems: systems that are chemical, can cope with noise, can self-repair, can forget (very important), make mistakes, are often irrational and mysterious, are not limited to on/off states, and have remarkable redundancy.

As for "robots" humans have thought about them for thousands of years, largely as a philosophical or literary device to try and understand ourselves. Something we can never truly achieve, but that doesn't stop us from trying, because we are, after all, human.


Humans tend to anthropomorphize things. You write about how a digital brain would "struggle", but it wouldn't struggle in any sense like an analog human system would. It would be more like an electric drill failing to drive a screw in: no emotions, no feelings, operating below the diverse complexity of even a single human cell. The digital can never reach the complexity of analog biological systems: systems that are chemical, can cope with noise, can self-repair, can forget (very important), make mistakes, are often irrational and mysterious, are not limited to on/off states, and have remarkable redundancy.

I agree with you, the AI models are simulating intelligence by mimicking our speech. Speech is our predominant means of communicating intelligence and defining things, but it's at the end of a long list of other processes that occur before we get to talking or writing about it.

And then, as for creativity: often that happens outside of our standard approach to logic. Einstein attested that his realizing the theory of relativity was not a product of his intellect, and that it sort of came to him.


Planning around the AI development that's already broken cover is kind of too late, and local regulation will simply shove AI development and services into laissez-faire jurisdictions, coupled to a decent VPN...

But... 5 years ago, who would have thought that AI would come first, and so fast, for the rule-governed professions, like law and accountancy, with professions like teaching and business analysis just waiting in the wings?

Maybe we need to value more of what is analogue, real, and practical in a new kind of economy, as the price of rule-governed (at least) intellectual tasks drops to near zero.


Agreed. Those that can, do, and those that can't, teach (or become politicians, or write comments). AI replaces the jobs of a subset of the former and just causes trouble for the latter - identifying the use of AI to cheat. We need a 'No. 8 wire' economy, rewarding those who realise there is a problem and are willing to work towards rational solutions.
