
Michael Strain predicts that artificial intelligence, like previous technological advances, will improve human welfare


The rapid advances in artificial intelligence in recent months have unleashed a tidal wave of worries. Will this new technology substantially reduce employment by eliminating the need for most human workers? Will it undermine democracy? Does it pose an existential threat?

Concern about technological change is nothing new. But it typically addresses what economists would describe as marginal effects: whether a larger share of workers without college degrees will find it slightly harder to get jobs, or whether income inequality will increase to some extent. Unease about AI, on the other hand, is of a different order of magnitude, with some experts predicting that it could upend civilisation – or even wipe it out.

Tech leaders have argued that certain AI systems “pose profound risks to society and humanity,” a sentiment echoed by leading AI scientists. A recent YouGov poll found that nearly half of respondents are concerned “about the possibility that AI will cause the end of the human race on Earth.” Over two-thirds support a pause on some kinds of AI development.

This view is astonishingly pessimistic.

Let’s start with the basic – but seemingly overlooked – fact that technological advances improve human welfare. In 1800, 43% of children died before the age of five. But in the following centuries, technological progress led to the development of drugs and therapies, new ways of treating disease, and productivity and wage growth. By 1900, around one-third of children died in their first years of life. In 2017, global childhood mortality was down to 4%.

Moreover, agricultural technologies have boosted food production and preservation, reducing hunger, and advances in energy technology, such as electrification, have improved the lives of billions. Overall, technological innovation has reduced poverty by generating wealth.

To be sure, the process of “creative destruction” unleashed by generative AI will eliminate the need for human workers to perform many of their current tasks. But the pessimists must remember that creative destruction creates as well as destroys.

Economists and other experts in the early nineteenth century could never have predicted the types of tasks that workers perform in today’s world. How could John Stuart Mill have foreseen that technological advances would one day lead to jobs such as systems analyst, circuit layout designer, and fiber scientist? Imagine trying to explain Bruce Springsteen’s job to David Ricardo. There is no need to go back that far: the MIT economist David Autor and his co-authors found that the majority of current employment is in occupations introduced after 1940.

Similarly, concerns that AI poses a threat to democracy reflect undue pessimism. While “deepfakes” – AI-generated images and videos that are synthetic but appear real – of political leaders and candidates could be used in sinister ways, new technologies also enable authentication of videos and images. Such tools are already being developed, and the financial rewards from meeting the demand for them will ensure that they remain reliable.

In fact, democracy could be strengthened by advances in AI. One of the technology’s most promising opportunities is in education: AI applications could conceivably act as private tutors for every student. This should brighten the outlook for democracy’s long-term survival. As James Madison wrote, “Knowledge will forever govern ignorance: And a people who mean to be their own Governors, must arm themselves with the power which knowledge gives.”

Will AI wipe out humanity? It looks more likely to do the opposite. AI is already being used in drug development, including for COVID-19 vaccines. A future pandemic – perhaps one much more lethal than COVID-19 – might be stopped in its tracks by an AI-developed drug.

AI could also help scientists to gain a better understanding of volcanic activity, which has been responsible for mass extinctions, and to detect and eliminate the threat of an asteroid hitting the Earth. These optimistic scenarios seem more plausible than the pessimists’ view that AI could somehow use our nuclear weapons against us.

That is not to say that the ride won’t be bumpy. The rapid development of generative AI will disrupt labour markets, and the resulting economic turmoil will be painful for many workers. And it may take time before media and political leaders learn how to expose and shut down deepfakes.

Policymakers should not be complacent. Compared to past waves of automation, more should be done to help workers who face AI-related disruption. In some countries, it may be necessary to strengthen the social safety net. Like any powerful technology, generative AI should be appropriately regulated, with an eye toward ensuring its development is not unduly stifled.

The right solution is not to panic or indulge in undue pessimism. Instead, we should be reasonably confident that, like all general-purpose technologies before it – electricity, electronics, modern transportation, the internet – generative AI will improve human welfare. The specific changes AI will unleash in the economy and throughout society will be impossible to predict – but, on the whole, they will be changes for the better.


Michael R. Strain, Director of Economic Policy Studies at the American Enterprise Institute, is the author, most recently, of The American Dream Is Not Dead: (But Populism Could Kill It) (Templeton Press, 2020). This content is © Project Syndicate, 2023, and is here with permission.


24 Comments

Who owns it and who profits from it?

It won't be most people. All of these technologies come with a cost.


And the commercialization of these technologies only leads to more commercial and criminal entities jumping on the bandwagon to add to the applications of that technology. To my mind, we only have to look at social media - it seems to me that the evidence to date suggests it does more harm than good.

All that said, I simply do not think there is a way to "back this up".  I'm already having to contend with AI's effects in my work - one of those effects being a realization that AI can more efficiently replace certain tasks/aspects of what I do.  A bit like GHG emissions, what we find is that each and every person/nation/society is but a small cog in the wheel.  

Aristotle conceived of and taught what he called the five intellectual virtues, or virtues of the mind:

 

  1. Artistry or craftsmanship (Greek: techne)
  2. Prudence or practical wisdom (phronesis)
  3. Intuition or understanding (nous)
  4. Scientific knowledge (episteme)
  5. Philosophic wisdom (sophia)

If you think of AI in terms of this typology - it is interesting to contemplate which of these intellectual virtues it has and/or will develop. It's also of interest to note that he claimed that it was phronesis that directly informed the moral virtues (our ethics/morality). 

    


I agree on the net downside to social media Kate.

I also think that AI will massively affect certain jobs over the next few years. I've already started using ChatGPT to do tasks in a few minutes that would take me an hour or so, but I am normally an early adopter of these things. Once that becomes the norm with less techy people, job sizes and workloads will have to be adjusted.


Once that becomes the norm with less techy people, job sizes and workloads will have to be adjusted.

And these days, the less techy people are, by and large, the adults - meaning young children/teens have already adopted the technology and will master it (and exploit it) much sooner than adults.

In other words, where tech is concerned I learn from my grandchildren every day :-).

As I mentioned before, the fear of AI has little to do with AI; it's much more a result of humans' fear of the unknown. Just look at our overreaction to Covid, something unknown that can't be seen and which is threatening…some. We're starting to pay for this overreaction now! The same fears of job losses arose at the advent of the industrial revolution over one hundred years ago, and more recently with the arrival of the internet. Sure, not everyone will benefit, and there will be cases of abuse (cybercrime, for example). But overall it will be largely beneficial for mankind. We just have to get over our overinflated fear of new things.


Here's a good question: do you think technology has led to people living more fulfilling lives, or less?

Not materially better, as in longer lives and nicer stuff, but emotionally?

We have crazy abundance and a surplus of misery.


Yes I think technology has led to people living more fulfilling lives.


I understand your view Yvil, but I would be a little more generous, I think. I believe the warnings are more about understanding the negative impacts that just a few bad players can have on large parts of the population. I believe AI will help us solve some pressing issues and problems, but human nature is such that some players will use it simply to find ways to bypass security designed to protect everyone, and/or to seek power, privilege and wealth for themselves over everyone else. Human history tells us those negative aspects are certain to happen. The call for a pause is to allow regulation, legislation and systems to be put in place to protect the majority. I agree that the overall benefit will likely be positive, but the potential for harm is, I suggest, far greater with AI than with potentially any other human invention in history.


Glad we agree Murray


This guy's take may make you think again, Yvil.

https://www.theguardian.com/commentisfree/2023/jun/16/ai-new-laws-powerful-open-source-tools-meta

The next few elections will be interesting with AI helping "some countries" meddle in others' affairs...maybe NZ 2023 will be the testing ground?


I'm less worried about AI and more worried about conventional intelligence, or rather the lack of it - something that has afflicted humankind persistently through the ages!


The author takes the extremes and misses the real point.

AI tools and technology have the chance to do real harm if they are put in charge of things where they don't have an ethical basis for making decisions. So, for example, if you don't properly program your automated car to avoid injuring humans and an OTA update goes out which results in it ignoring pedestrians, 1,000 people might be killed before you step in to revoke the update.

That's the biggest threat now, as AI takes the leap into comprehension of the messy real world - huge or distributed AI decision-making technologies which have no ethical basis. You don't need the machines to be self-aware or anything like that for their actions to have deadly consequences.


From an evolutionary biological perspective, there were Homo sapiens and now we are moving on to techno sapiens - some would argue they could well do a better job of the 'sapient' bit :-).

they don't have an ethical basis for making decisions

 

What is the ethical basis humans use for making decisions?


Some form of humanism for the non-religious, usually; religious people usually default to knowing ethics via their religious texts.


Hmmm, that's a pretty wide range of opinions on what the ethical thing to do is in a lot of situations. Is AI really likely to be worse than those?

 


Well you could start with the laws of robotics, if you really wanted a place to begin. Then you would have to extend them to include things like lying or misleading information. But you would have to make the AI know about its context and be able to view and project its likely outcomes, which is generally more advanced than what we have now.


Good question - which is exactly why we should teach the philosophy of ethics from primary through to secondary.  I teach this at tertiary level and it is new (and very thought-provoking) knowledge for most of the students.


I always remember when our prof first did the trolley problem in class...some students got very animated about it. They were obviously the deontologists...🙄

 

https://en.m.wikipedia.org/wiki/Trolley_problem


Yes, I can imagine - lol.  I once linked to Gareth Morgan's "Cats To Go" website in a class - and that got very animated.


Until now, tech has transformed work into jobs most of us could do (look, I'm now driving a tractor or a factory machine, not a horse or a handsaw). We are now at the stage where the new jobs are beyond most of us. Seriously, how many displaced office workers or drivers are going to rejoice at their new job as a robotics engineer or on the cutting edge of AI research? Yes, some, for a while.

Our capabilities are being replaced, by the creations of our cleverest, and it was always going to happen and is unstoppable.  This is not more of the same - we've reached a new phase.  There will be huge overall benefits, and at the same time a whole lot of us are going to rapidly lose value in the workforce.  Automation is starting to pull away from us now.


My prediction is that AI will make the digital world collapse into an inoperable swamp of cybercrime.


Very likely.  I imagine ill-intentioned nerds weaponised by AI unleashing tools probing every digital vulnerability.  Plus a hundred other problems, eg using CRISPR to tinker with bugs and see what happens.  This is a whole new shit storm.  An arms race of AI used by "good guys" to defend against an endless stream of scumbags and their latest tricks.


I asked Bing Chat, which supposedly uses OpenAI's GPT-4, how long a chimpanzee's cecum was and it couldn't answer the question at all. OpenAI's ChatGPT could. Quite disturbing as it is an easy question to answer. It may be limiting answers that seem remotely medical. The censorship will be the death of this technology if that is the case.
