
Generative AI has introduced tantalising possibilities. Yet the initial excitement surrounding AI has given way to genuine and growing concerns. The IMF makes an early attempt to understand AI’s implications for growth, jobs, inequality, and finance

AI frenzy

By Gita Bhatt

Full disclosure: The latest issue of Finance & Development was produced entirely with human intelligence. But someday soon at least parts of this magazine may be assisted by artificial intelligence—a topic that has dominated global discourse since ChatGPT's introduction one year ago.

Generative AI has introduced tantalising new possibilities in both the public and private spheres. Think how these “machines of the mind” can improve health care diagnoses, close education gaps, tackle food insecurity with more efficient farming, drive planetary exploration—not to mention eliminate the drudgery of work.

Yet the initial excitement surrounding AI has given way to genuine and growing concerns—including about the spread of misinformation that disrupts democracy and destabilises economies, threats to jobs across the skills spectrum, a widening of the gulf separating the haves and have-nots, and the proliferation of biases, both human and computational.

This issue is an early attempt to understand AI’s implications for growth, jobs, inequality, and finance. We bring together leading thinkers to explore how to prepare for an AI world.

In our lead article, Stanford’s Erik Brynjolfsson and Gabriel Unger sketch two wildly different potential outcomes (beneficial or detrimental) for AI’s effect on each of three important facets of the economy—productivity growth, income inequality, and industrial concentration (the collective market share of the largest firms in a sector). The future that emerges will be a consequence of many things, including technological and policy decisions made today, they note.

For MIT’s Daron Acemoglu and Simon Johnson, AI’s ultimate impact depends on how it affects workers. Innovation always leads to higher productivity, but not always to shared prosperity, depending on whether machines complement or replace humans. The economists outline policies, such as giving labour a voice, that can redirect efforts away from pure automation toward a more “human-complementary” path that creates new and higher-quality tasks.

AI progresses by leaps and bounds. Given its inherent unpredictability, Anton Korinek, of the University of Virginia, recommends scenario planning. He lays out how different technological paths, depending on whether—and how soon—AI exceeds human intelligence, would lead to vastly different outcomes for the economy and workers. Policymakers should prepare reforms for these multiple scenarios and revise as the future unfolds, he notes.

This leads us to AI governance. Ian Bremmer, president of Eurasia Group, and Mustafa Suleyman, CEO of Inflection AI, point to regulatory challenges amid a race for AI supremacy among governments. They warn that governing AI will be among the international community’s most difficult challenges in coming decades and outline principles for AI policymaking.

The IMF’s Gita Gopinath urges balancing innovation and regulation in developing a unique set of policies for AI. Because AI operates across borders, we urgently need global cooperation to maximise the enormous opportunities of this technology while minimising the obvious harms to society, she writes.

In other thought-provoking articles, Daniel Björkegren and Joshua Blumenstock show how Kenya, Sierra Leone, and Togo adapted AI to benefit the poor. Nandan Nilekani describes how India is on the cusp of an AI revolution to address pressing economic and social challenges. And we profile Harvard labour economist Lawrence F. Katz, whose defining work on inequality illuminates the discussion on AI.

AI can develop in very different directions, underscoring the role of society in actively and collectively determining its future. What is clear is that the technology must be guided as a tool that enhances, rather than undermines, human potential and ingenuity. Ultimately, it’s about what AI can do to help people.


Gita Bhatt is the Head of Policy Communications at the IMF and Editor-In-Chief of Finance & Development Magazine. This article was originally posted here.


4 Comments

"He lays out how different technological paths, depending on whether—and how soon—AI exceeds human intelligence"

I really can't take this "AI is getting smarter" thing seriously.  There is no Intelligence in it - just tricks like trawling millions of pages of our text (good and bad) and rehashing it back to us.

As far as I can see there is almost no work being done on anything that will have an IQ over 0.


I really can't take this "AI is getting smarter" thing seriously.  There is no Intelligence in it - just tricks like trawling millions of pages of our text (good and bad) and rehashing it back to us

Isn't that effectively what our brains do? Soak up loads of info, make connections, then use it to create/recreate stuff. All just chemical/electrical signalling at the end of the day...

The main difference I see between an animal and a computer is we have nerves that provide primitive drives (I'm hungry, ouch that's hot), and our intelligence is built up in layers on those basic feelings, becoming increasingly abstract thoughts and concepts at the top.

Computers are missing those fundamentals: success and failure feel the same to them, i.e. they don't feel anything and don't care about anything. So it's just code with nothing to fall back on when it doesn't fit the environment, unless we give them code to fake it and keep fiddling with it when it fails.

We could train animals for some task, and if we get it wrong they will try to bust out of the training because they will see it's not working (I'm hungry, I'm hot, fuck this).  A computer will just chug on to failure and not care.

This is why I think the route to AGI will be to start with analogues of animal nerves to provide the basic drives, then let intelligence emerge from the mess.  But I don't see much investment in ideas like that - the fast cash is in chatbots that replace helpdesks, language models that replace reporters, robots that replace drivers.


AI raises interesting questions. Observers I have listened to seem to view AI as a trend amplifier, rather than blue-sky-thinking tech. Your comment about success and failure having the same weighting is interesting. Unless it's self-aware, there's no skin in the game for AI.

I wonder how AI telling us something we don't want to hear would go down? Like, your activities are driving civilisation to collapse and extinction? My guess is it would be ignored, like the "World3" modelling.

The early attempts seem to just parrot whatever database they're plugged into, together with the human biases. Perhaps it can ferret out something of use the human data gatherers missed? Seems like a risky and expensive way to manage the future of human society.

The plan is to have everything humans do plugged into the net, where life is tracked, planned, controlled, provided for, by the machine.

In my view AI is probably 10% upside 90% downside. General AI ultimately 100% downside, if it happens. I value my freedom more than techno utopian sales pitches. Then again, no one asks me.

AI's definitely the shortest route to perfecting weaponry. Bioweapons with associated vaccines for the chosen few. Autonomous weaponry. The MIC will be drooling.
