5 Strategies to Spur Inclusive Global AI Governance


Artificial Intelligence (AI) development is advancing at a dizzying pace, especially for legal scholars, who work in a “time zone” where things move much more slowly. Beyond regulation, we now need to reevaluate the normative tools we should explore to properly seize the opportunities and tackle the risks of AI.

Indeed, AI needs to be controlled. Beyond its exciting potential for new business models, processes and productivity gains, the adoption of AI carries real harms. As the deployment of generative AI tools such as ChatGPT has shown, AI can negatively affect human rights, the environment, national security and democracy, for instance through AI-generated biological and disinformation threats.

No single normative tool (standards, codes of conduct, ethical guidelines, ombuds institutions, regulatory sandboxes, etc.) will be able to cover all the angles needed to ensure that AI is used in line with social good objectives, such as the attainment of the UN’s Sustainable Development Goals. To reach that goal, we will have to develop integrated, agile and multilayered normative models.

Promising news points in that direction.

There is now a strong consensus that AI governance is a global and interdependent challenge, one that countries will only meet successfully if they develop common strategies and rules, because “AI knows no borders”.

Even the U.S. and China seem to have joined the chorus: the U.S. spearheaded the process that led to the adoption, in March 2024, of a landmark global resolution on AI at the UN General Assembly calling for the development of safe, secure and trustworthy AI systems aligned with sustainable development objectives.

More recently, the UN adopted a Chinese resolution, supported by the U.S., calling for AI’s benefits to be shared more equitably across the globe. Both countries, rivals on various fronts including AI development, obviously want to play a major role in shaping the future of the technology on a global scale.

But if we are aware of the threat, know the importance of acting collectively to govern AI and have the capacity to get our act together, why don’t we act more swiftly and firmly?

Here are some reasons why:

  • AI is a moving target (we need to prepare for what we know and for a lot we don’t)
  • States’ role on the international scene is declining, while large firms wield major influence in many countries
  • The multilateral system is under immense pressure
  • The AI arms race is shaping international geopolitical dynamics at a highly strategic and politicized level
  • The international community has not yet reached a strong consensus on which AI risks should be addressed first

To establish the most effective approach to global AI governance, we should consider at least five strategies.

1. The Unilateral Extraterritorial Regulation Strategy

This strategy rests on the following idea: enact effective regulation in one jurisdiction and then rely on the direct or extraterritorial effects of that regulation to influence AI governance in other jurisdictions. 

The clearest and strongest example of this is the AI Act adopted by the European Union in 2024. Businesses wishing to operate in the EU will need to comply with the Act, which will therefore influence regulation and business practices beyond the EU’s borders. This so-called “Brussels effect” could make the AI Act the world’s most influential AI regulation.

2. The Consolidation Strategy

Several international organizations are already active in the AI field, such as the OECD, UNESCO and the World Health Organization (WHO), along with private entities like ISO and IEEE that lead the development of technical, industry and professional standards.

But there is a pressing need for enhanced coordination to streamline their efforts and amplify their effectiveness, via initiatives like the recently created UN AI Advisory Body, which is currently mapping the global AI landscape in order to propose appropriate coordination strategies.

3. The New Player Strategy

The goal is to create a new international AI governing body that could set norms, share good AI practices from around the world, disseminate trustworthy AI knowledge and monitor trends in the development and deployment of AI. 

Models we could take inspiration from, and whose founding contexts somewhat echo the current AI environment, are the Intergovernmental Panel on Climate Change (IPCC) and the International Atomic Energy Agency (IAEA). The IPCC was created to build scientific consensus in the context of a growing environmental consciousness at the international level, while the IAEA was founded as a tool for scientific and technical cooperation in response to the fears and expectations generated by nuclear technology.

4. The Global Norm-Setting Strategy

This strategy mostly involves developing new, legally binding global instruments, with conflict prevention and resolution mechanisms to fully support their implementation.

The closest initiative is the Council of Europe’s recent Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. It was mostly negotiated between the Council’s 46 member states and observer states like Canada, while Global South countries were largely left out of the negotiating process. Despite its value, it thus lacks the truly global and inclusive approach needed to move forward, and it has no direct binding effect on the private sector, a major and ubiquitous player in AI.

5. The International Joint Research Strategy

The idea, mentioned from time to time, would be to create an AI equivalent of the European Organization for Nuclear Research (CERN): an international research organization where AI scientists from around the world, including from the social sciences and humanities, would come together to work on AI projects aligned with social good objectives, such as AI for the environment, health or education.

Talent, resources and computing power would be pooled to work on these projects, which would help create a form of international monopoly over some types of AI research activities while fostering scientific diplomacy.

What’s next?

We should immediately focus on strategy 2 (Consolidation) and then work, in the medium to long term, on strategies 4 (Global Norm-Setting), 5 (International Joint Research) and 3 (New Player), in that order. Strategy 1 is not a viable option, as it leaves out the interests of many actors and countries.

Global AI governance goes beyond setting boundaries for what is considered unacceptable in our world: it is a covenant of solidarity, a collective agreement that enhances safety and well-being for all in the realm of AI. It is a matter that should concern us all.