The small equity-research shop Citrini recently sent a panic through financial markets when it outlined a scenario in which AI ends most white-collar employment by 2028, with dire consequences for the broader economy. But this forecast is surely too pessimistic in some respects. Outside a few sectors, like software, frictions to adoption and sheer inertia will probably slow the pace of change. This has always been the case. For example, although automated telephone exchanges were possible in the 1920s, the last human telephone operator in the United States was not replaced until the 1980s.
Moreover, the technology itself is always only one variable. There also must be processes and structures around it to assure customers of reliable service. This is where incumbents have an advantage over challengers, even if they do not use the latest technology.
And even if incumbents are displaced, the new opportunities created by AI-induced cost reductions and productivity enhancements need not lead only to more AI. They may also require the work of humans – as with the internet and the rise of influencers.
Still, in some ways, the Citrini post is not pessimistic enough. Even setting aside the possibility that we might all become slaves to some AI overlord, the broader economic outcomes depend on how good AI gets and how fast; the pace of adoption by users; who profits from it; and how society reacts. Given all these variables, some extreme scenarios are indeed conceivable.
Consider, for example, a future in which a few differentiated platforms (say, Anthropic or Meta) reach a level of generalized AI that allows them to outpace the competition and steadily charge high prices to user firms. These dominant platforms would generate enormous profits, augmenting the incomes of their employees (who will be few, because AI will cull their ranks) and their shareholders. At the same time, the many firms relying on their services would be willing to pay, because AI would raise their own productivity, allowing them to shed more white-collar workers.
These unemployed workers would then look for work in adjacent industries where AI has not yet rendered their skills useless. But if those jobs are few, they will join lines for work as gardeners, waiters, and shop assistants, further depressing wages for these occupations. Assuming that AI displaces cognitive tasks before skilled physical ones, machinists, plumbers, and masons may still have work until robots become sophisticated enough. But over time, competition for those jobs will also increase as white-collar workers retrain. The pain will spread, and only the AI platforms and their investors will benefit. Or will they?
Before answering that, consider another “competitive” scenario in which no platforms “win” because there is little differentiation between ChatGPT 33.2, Gemini 25, and all the others. Although this scenario may still be devastating for white-collar jobs, prices for AI will be low, and the productivity benefits will flow through the economy, as will the resulting profits. Spared from expending enormous sums on AI, user firms could cut prices and expand production to meet the increased demand, implying more jobs elsewhere. There would be far less pain than in the first scenario, because lower-priced goods and services would allow pre-existing worker savings to go further.
Not only do current trends suggest that this second scenario is more likely than an AI oligopoly, but the government could take steps to ensure that it materializes, for example, through AI price regulations or a refusal to protect AI model builders from those who copy them. Would-be AI oligarchs should not assume that society will defend their enormous profits even as their products cause widespread job losses and hardship.
Of course, AI incumbents will lobby aggressively, corrupting some legislators to block regulation. They will mount public campaigns, using their many channels of influence to argue (not entirely incorrectly) that regulation will be ham-handed, harming efficiency and innovation while benefiting geopolitical rivals. But if the AI-induced pain is indeed widespread, the political impetus for intervention will remain strong.
Even if the state fails to ensure competitive AI prices, it can tax oligopolistic AI providers, their employees, and their shareholders to compensate the affected. The difficulty here lies in targeting. How do you identify those with supernormal profits from AI? How do you support those harmed, given how hard it has been to assist trade-affected workers in the past? And how do you distinguish between a technologically displaced worker and a worker laid off because of adverse business conditions or incompetence?
To avoid some of these questions, there will probably be a push for generous unemployment support regardless of the cause – a first step toward an eventual universal basic income. But this raises another problem, because even if fiscally strapped governments can raise sufficient revenues, there will still be many jobs that require human workers. Overly generous unemployment benefits therefore will push up the wages employers have to offer to coax workers out of unemployment, further reducing job creation.
Ultimately, there are no easy public responses to the problem of large-scale but not universal unemployment. Societies will have to experiment creatively, improving the safety net somewhat, while encouraging businesses to create jobs and reskill workers where possible. At the same time, if any of the AI platforms racing to achieve a near-monopoly does reach its goal, government policy reactions will almost certainly impair its profits. How, then, will these companies’ massive and still-growing debts be serviced? Will a financial crisis follow?
The best we can hope for is a Goldilocks scenario where the AI rollout is not so fast that workers cannot learn how to augment their jobs with AI, rather than being displaced; and where the AI industry is not too oligopolistic, so that the benefits accrue to society more broadly. Imaginative commentaries like the Citrini post force us to think about what might happen if the AI story turns out differently. Now is the time to map out the possible scenarios and start preparing for them.
Raghuram G. Rajan, a former governor of the Reserve Bank of India and chief economist of the International Monetary Fund, is Professor of Finance at the University of Chicago Booth School of Business and the co-author (with Rohit Lamba) of Breaking the Mold: India’s Untraveled Path to Prosperity (Princeton University Press, May 2024). Akhil Rajan also contributed to this commentary. Copyright: Project Syndicate, 2025, published here with permission.
The real work still needs to be done... for how many will be the ultimate question, should AI progress as anticipated.
All the major AI initiatives I see have significant reductions in FTE as a realized benefit.
One group CEO is talking about a 35% reduction, and it's a very big company.
These are not entry level jobs, and the same jobs will go at competitors as well.
Local CEOs and CTOs have FTE-reduction KPI targets.
Major centers like Auckland and Wellington will suffer these losses the most.
Likely to start seeing tax revenue impacts in 2027. Without growing taxes we will need to start cutting services.
With quantum computing the topic this morning, it raises the question of how much work needs to be done, for how many, and crucially how the limited resources are allocated.
AI may have an unpalatable answer to those questions.
It's interesting that human survival is really determined by the lower levels of Maslow's hierarchy of needs.
The lower levels keep you alive but the higher levels keep you wanting to be alive. AI will probably ensure that the lower levels are available for every human, but at what cost to our sense of purpose as both individuals and a species.
We have built incredibly complex institutions on top of this framework; will they lose funding and technically no longer be required? Universities can charge large fees based on your potential income after graduation, but in the new world AI will be able to tutor you in any topic for free, and you will be unlikely to be able to charge anyone for your learning. Will universities become research institutions only?
Let's look 25 years into the future.
- What does democracy achieve in a world where human intelligence is not required and takes no part in commerce?
- Would AI listen to democratic leaders' decisions? Why would it need to? Can it be made to?
- Unions are no longer required, unless it's perhaps a union for humans...
- Even governments will no longer require many human bureaucrats.
- Will we accept being judged by AI in a court of law? Even if it's a human judge assisted by AI reasoning?
- How will we divide scarce resources if every person is paid the same universal benefit? Eat your gruel and be happy.
- Once we become dependent on AI running the world, what happens when the power goes out in a massive earthquake, or in a war where undersea cables are cut on purpose?
- What does war look like in this world? AI models seem very willing to use nukes in wargaming situations.
- How is current debt repaid in a world where humans can no longer effectively sell intelligence?
I have seen what AI can achieve, and it's transformational compared to a world where human intelligence is expensive and limited; we will be able to accelerate technology at amazing speed. I am not against AI; I just ask what the wider consequences will be for society.
When do we start to have this conversation as a society?
My questions are more basic: how many humans are needed to ensure the progression of AI, when AI is increasingly framing our questions?
AI is now writing the next release of its own codebase; maybe not many humans are now required.