
Edmund Phelps considers the effects of automation and artificial intelligence from the perspective of welfare economics

The robots are no longer coming; they are here. The COVID-19 pandemic is hastening the spread of artificial intelligence (AI), but few have fully considered the short- and long-run consequences.

In thinking about AI, it is natural to start from the perspective of welfare economics – productivity and distribution. What are the economic effects of robots that can replicate human labour? Such concerns are not new. In the nineteenth century, many feared that new mechanical and industrial innovations would “replace” workers. The same concerns are being echoed today.

Consider a model of a national economy in which labour performed by robots matches that performed by humans. The total volume of labour – robotic and human – will reflect the number of human workers, H, plus the number of robots, R. Here, the robots are additive – they add to the labour force rather than multiplying human productivity. To complete the model in the simplest way, suppose the economy has just one sector, and that aggregate output is produced by capital and total labour, human and robotic. This output provides for the country’s consumption, with the rest going toward investment, thus increasing the capital stock.

What is the initial economic impact when these additive robots arrive? Elementary economics shows that an increase in total labour relative to initial capital – a drop in the capital-labour ratio – causes wages to drop and profits to rise.
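The article does not specify a production function, but the mechanism can be illustrated with a standard Cobb-Douglas sketch (the functional form and parameter values here are assumptions, not part of the original model): output is produced from capital and total labour H + R, the wage is the marginal product of labour, and the profit rate is the marginal product of capital.

```python
# Illustrative sketch of the one-sector model with "additive" robots.
# Cobb-Douglas production is assumed purely for illustration.

ALPHA = 0.3  # capital's share of output (hypothetical value)

def economy(K, H, R=0.0):
    """Return (wage, profit_rate) given capital K and total labour H + R."""
    L = H + R                      # robots add to the labour force
    Y = K**ALPHA * L**(1 - ALPHA)  # aggregate output
    wage = (1 - ALPHA) * Y / L     # marginal product of labour
    profit_rate = ALPHA * Y / K    # marginal product of capital
    return wage, profit_rate

w0, r0 = economy(K=100.0, H=100.0)          # before the robots arrive
w1, r1 = economy(K=100.0, H=100.0, R=25.0)  # robots arrive, capital fixed

assert w1 < w0  # capital-labour ratio falls, so the wage falls
assert r1 > r0  # ... and the profit rate rises
```

Any constant-returns production function with diminishing marginal products would give the same qualitative result.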

There are three points to add. First, the results would be magnified if the additive robots were created from refashioned capital goods. That would yield the same increase in total labour, with a commensurate reduction in the capital stock, but the drop in the wage rate and the increase in the rate of profit would be greater.

Second, nothing would change if we adopted the Austrian School’s two-sector framework in which labour produces the capital good and the capital good produces the consumer good. The arrival of robots still would decrease the capital-labour ratio, as it did in the one-sector scenario.

Third, there is a striking parallel between the model’s additive robots and newly arrived immigrants in their impact on native workers. By pushing down the capital-labour ratio, immigrants, too, initially cause wages to drop and profits to rise. But it should be noted that with the rate of profit elevated, the rate of investment will rise. Owing to the law of diminishing returns, that additional investment will drive down the profit rate until it has fallen back to normal. At this point, the capital-labour ratio will be back to where it was before the robots arrived, and the wage rate will be pulled back up.
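The self-correcting dynamic described above can be sketched with a simple Solow-style accumulation rule (the savings rate, depreciation rate, and Cobb-Douglas form are all assumptions for illustration): because the steady-state capital-labour ratio depends only on those parameters, not on the size of the labour force, the ratio — and hence the wage — ends up where it started.

```python
# Sketch of the transition dynamics: elevated profits induce investment
# until the capital-labour ratio is restored. Parameters are hypothetical.

ALPHA, S, DELTA = 0.3, 0.2, 0.05  # capital share, savings rate, depreciation

def steady_state_ratio(L, K0, steps=5000):
    """Iterate K' = K + S*Y - DELTA*K until the capital stock settles."""
    K = K0
    for _ in range(steps):
        Y = K**ALPHA * L**(1 - ALPHA)
        K = K + S * Y - DELTA * K
    return K / L

k_before = steady_state_ratio(L=100.0, K0=50.0)
k_after  = steady_state_ratio(L=125.0, K0=50.0)  # robots added to labour

# The long-run capital-labour ratio (and hence the wage) is unchanged:
assert abs(k_before - k_after) < 1e-6
```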

To be sure, the general public tends to assume that “robotisation” (and automation generally) leads to a permanent disappearance of jobs, and thus to the “immiseration” of the working class. But such fears are exaggerated. The two models described above abstract from the familiar technological progress that drives up productivity and wages, making it reasonable to anticipate that the global economy will sustain some level of growth in labour productivity and compensation per worker.

True, sustained robotisation would leave wages on a lower path than they otherwise would have taken, which would create social and political problems. It may prove desirable, as Bill Gates once suggested, to levy taxes on income from robot labour, just as countries levy taxes on income from human labour. This idea deserves careful consideration. But fears of prolonged robotisation appear unrealistic. If robotic labour increased at a non-vanishing pace, it would run into limits of space, atmosphere, and so on.

Moreover, AI has brought not just “additive” robots but also “multiplicative” robots that enhance workers’ productivity. Some multiplicative robots enable people to work faster or more effectively (as in AI-assisted surgery), while others help people complete tasks they otherwise could not perform.
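The contrast with additive robots can be made concrete by modelling a multiplicative robot as a factor m that scales each worker's effective labour (again, the Cobb-Douglas form and parameter values are illustrative assumptions): with capital held fixed, raising m raises the wage per human worker rather than lowering it.

```python
# Illustrative contrast: "multiplicative" robots scale each worker's
# effective labour by m. Cobb-Douglas production assumed for illustration.

ALPHA = 0.3  # capital's share of output (hypothetical value)

def wage_per_worker(K, H, m=1.0):
    """Marginal product of a human worker when robots multiply
    each worker's effective labour by a factor m."""
    Y = K**ALPHA * (m * H)**(1 - ALPHA)  # effective labour is m*H
    return (1 - ALPHA) * Y / H

w_plain = wage_per_worker(K=100.0, H=100.0)         # m = 1, no robots
w_boost = wage_per_worker(K=100.0, H=100.0, m=1.5)  # productivity up 50%

assert w_boost > w_plain  # multiplicative robots raise the wage per worker
```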

The arrival of multiplicative robots need not lead to a lengthy recession of aggregate employment and wages. Yet, like additive robots, they have their “downsides.” Many AI applications are not entirely safe. The obvious example is self-driving cars, which can (and have) run into pedestrians or other cars. But, of course, so do human drivers.

A society is not wrong, in principle, to deploy robots that are prone to occasional mistakes, just as we tolerate airplane pilots who are not perfect. We must judge costs and benefits. For efficiency, people ought to have the right to sue robots’ owners for damages. Inevitably, a society will feel uncomfortable with new methods that introduce “uncertainty.”

From the perspective of ethics, the interface with AI involves “imperfect” and “asymmetric” information. As Wendy Hall of the University of Southampton says, amplifying Nicholas Beale, “We can’t just rely on AI systems to act ethically because their objectives seem ethically neutral.”

Indeed, some new devices can cause serious harm. Implantable chips for cognitive enhancement, for example, can cause irreversible tissue damage in the brain. The question, then, is whether laws and procedures can be instituted to protect people from harm to a reasonable degree. Barring that, many are calling on Silicon Valley companies to establish their own “ethics committees.”

All of this reminds me of the criticism leveled at innovations throughout the history of free-market capitalism. One such critique, the book Gemeinschaft und Gesellschaft by the sociologist Ferdinand Tönnies, ultimately became influential in Germany in the 1920s and led to the “corporatism” arising there and in Italy in the interwar period – thus bringing an end to the market economy in those countries.

Clearly, how we address the problems raised by AI will be highly consequential. But they are not yet present on a wide scale, and they are not the main cause of the dissatisfaction and resulting polarisation that have gripped the West.

Edmund S. Phelps, the 2006 Nobel laureate in economics and Director of the Center on Capitalism and Society at Columbia University, is author of Mass Flourishing and co-author of Dynamism. This content is © Project Syndicate, 2020, and is here with permission.



Robots accelerate gains to capital at the cost of the labour market. It's great that our productivity is ever increasing with rapid technological growth, but labour is taking an ever-decreasing share of the pie. That is a problem in a system that needs growth simply to maintain a level of prosperity and wellbeing.

With the labour market letting down the economy, there needs to be a viable supplement to the distribution of that productivity to maintain the consumer base that powers the economy. A citizens' dividend, or even a UBI, would be ideal, as long as it goes hand in hand with a land tax to encourage efficient and productive occupation of land. Otherwise money will simply continue to pump up asset values, watering down the effectiveness of capital expenditure rather than being used productively.

This economic solution is Georgism.


The article's 'robotisation' appears to concentrate on a rather narrow band of - er - robots. What is less obvious, but much more pervasive, is the automation of much accounting. Documents are passed electronically from supplier to consumer (no paper, no copiers, no postage, no mail clerks, no re-keying), from sales to customer, between stages along a vertical chain (supermarkets are a classic example), and between FMCG outlets or distribution centres, direct to consumer. There's a lot of clever stuff along these chains, and two-plus decades of experience. The end result is the same: less need for wetware doing repetitive tasks indifferently...


I'm not sure your electronic transmittal actually counts as "robotics". I know from personal experience there are many layers of robotics: job displacement by robots is one, automation is another, and AI and "fuzzy logic" is yet another. But those advances are neither mutually inclusive nor exclusive per se. There will be task and skill displacement, and therefore employment transition, no different from the loss of farriers and wheelwrights when society moved from horse-drawn carriages to motor vehicles and they were replaced by mechanics, auto electricians and tyre fitters. Robotics and its attendant tech are a foregone conclusion: they are here and expanding. It is how we deal with the transition that will be the crux.


There are hardware robots (building cars) and software robots (like your accounting systems), and they both displace human labour. So in the context of the issues this article raises, I don't think the distinction matters.
I see govts getting increasingly desperate to pin down and tax rich listers and big corporates to be able to support/pacify populations increasingly reliant on transfer payments. Computers may still be pathetic (in 50 years, millions of the smartest people on Earth have not succeeded in raising the IQ of computers above 0) but even with machines reliant on us to pre-think everything, they are being relentlessly put to work that humans used to do.


Your comment, Rob: "Computers may still be pathetic (in 50 years, millions of the smartest people on Earth have not succeeded in raising the IQ of computers above 0)". There are quantum computers. Sure, they are experimental at the moment, but their speed of processing is mind-blowing, and the major computer companies, including Microsoft, IBM and others, are developing them right now, along with the software to run on them. In the future they will be home computers; eventually they will be iPhones and cell and phone networks with incredible power, and with AI, who knows.


Do you know if they will be qualitatively different? (genuine question - you sound like you know more about the future on this than me)
If they are just way faster at running the code or algorithms we will still have to give them, then their IQ will still be 0. I.e., we make a mistake and accidentally tell them that smashing into a wall is success, and they will smash into a wall until something with a brain spots the bug.
I know people have been writing self-learning algorithms for decades, but the starting point is always people, and the algorithms need things like data generated by people saying "that is a cat, not a dog" (and if they get it wrong, we have the wall-smashing problem). So what will really change?