
Ian Bremmer et al. propose five core principles to maximise the benefits and minimise the risks of artificial intelligence systems


By Ian Bremmer, Carme Artigas, James Manyika, and Marietje Schaake*

Although artificial intelligence has been quietly helping us for decades, with progress accelerating in recent years, 2023 will be remembered as a “big bang” moment. With the advent of generative AI, the technology has broken through into popular consciousness and is shaping public discourse, influencing investment and economic activity, sparking geopolitical competition, and changing all manner of human activities, from education to health care to the arts. Each week brings some new breathtaking development. AI is not going away, and the pace of change is accelerating.

Policymaking is moving almost as fast, with the launch of new regulatory initiatives and fora seeking to meet the moment. But while ongoing efforts by the G7, the European Union, and the United States are encouraging, none of them is universal or representative of the global commons. In fact, with AI development driven by a handful of CEOs and market actors in just a few countries, the voices of the majority, particularly from the Global South, have been absent from governance discussions.

The unique challenges that AI poses demand a coordinated global approach to governance, and only one institution has the inclusive legitimacy needed to organize such a response: the United Nations. We must get AI governance right if we are to harness its potential and mitigate its risks. With that in mind, the UN High-level Advisory Body on AI was established to offer analysis and recommendations for addressing the global governance deficit. It comprises a group of 38 individuals from around the world, representing a diversity of geographies, gender, disciplinary backgrounds, and age, and drawing on expertise from government, civil society, the private sector, and academia.

We feel privileged to serve as the Advisory Body’s Executive Committee. Today, we released the group’s interim report, which proposes five principles for anchoring AI governance and addressing several interrelated challenges.

First, since the risks of AI differ across diverse global contexts, each context will require tailored solutions. That means recognizing how rights and freedoms can be jeopardized by specific choices about design, use (and misuse), and governance. It also means recognizing that failing to apply AI constructively – what we call “missed uses” – can needlessly exacerbate existing problems and inequalities.

Second, since AI is a tool for economic, scientific, and social development, and since it is already assisting people in daily life, it must be governed in the public interest. That means bearing in mind goals related to equity, sustainability, and societal and individual well-being, as well as broader structural issues like competitive markets and healthy innovation ecosystems.

Third, the emerging regulatory frameworks across different regions will need to be harmonized in order to address AI’s global governance challenges effectively. Fourth, AI governance should go hand in hand with measures to uphold agency and to protect privacy and the security of personal data. Lastly, governance should be anchored in the UN Charter, international human-rights law, and other international commitments where there is a broad global consensus, such as the Sustainable Development Goals.

Affirming these principles in the context of AI requires overcoming some stubborn challenges. AI is built on massive amounts of computing power, data, and – of course – specific human talents. Global governance must consider how to develop and ensure broad access to all three. It also must address capacity building for the basic infrastructure that underpins the AI ecosystem – such as reliable broadband and electricity – especially for the Global South.

Greater efforts are also needed to confront both known and still-unknowable risks that could emerge from AI’s development, deployment, or use. AI risk is a hotly debated subject: while some focus on eventual end-of-humanity scenarios, others are more worried about harms to people here and now. But there is little disagreement that the risks of ungoverned AI are unacceptable.

Good governance is anchored in solid evidence. We foresee the need for objective assessments of the state of AI and its trajectory, to give citizens and governments a sound foundation for policy and regulation. At the same time, an analytical observatory to assess AI’s societal impact – from job displacement to national-security threats – would help policymakers keep up with the immense changes that AI is driving offline. The international community will need to develop a capacity to police itself, including by monitoring and responding to potentially destabilizing incidents (as major central banks do in the face of financial crises), and by facilitating accountability and even enforcement action.

These are just a few of the recommendations we are advancing. They should be seen as a floor, not a ceiling. More than anything, they are an invitation for more people to tell us what kinds of AI governance they would like to see.

If AI is to fulfill its global potential, new structures and guardrails are needed to help us all thrive as it evolves. Everyone has a stake in AI’s safe, equitable, and accountable development. The risks of inaction are also clear. We believe that global AI governance is essential to reap the significant opportunities and navigate the risks that this technology presents for every state, community, and individual today and for generations to come.


Ian Bremmer, Carme Artigas, James Manyika, and Marietje Schaake are members of the Executive Committee of the UN High-level Advisory Body on Artificial Intelligence. Ian Bremmer is Founder and President of Eurasia Group and GZERO Media. Carme Artigas is Secretary of State for Digitalization and Artificial Intelligence of Spain. James Manyika is Senior Vice President of Research, Technology, and Society at Google/Alphabet. Marietje Schaake, a former member of the European Parliament, is Policy Director of the Cyber Policy Center at Stanford University and President of the CyberPeace Institute. Copyright: Project Syndicate, 2023, and published here with permission.


1 Comment

Game theory dictates that the “alignment” safety rails spoken about so positively here will become obsolete the second any uncensored model outperforms. It’s so obvious. Regulation is at worst dangerous, and at best a waste of time.
