Wednesday, January 29, 2025 – First announced at the November 2023 AI Safety Summit at Bletchley Park, the report, inspired by the assessment reports of the UN's Intergovernmental Panel on Climate Change, brings together leading international expertise with the support of the UK Department for Science, Innovation and Technology.
Led by Yoshua Bengio, Full Professor at Université de Montréal, Founder and Scientific Director of Mila and Canada CIFAR AI Chair, and a team of 96 international experts nominated by 30 countries, the UN, EU, and OECD, it will now inform discussions at the upcoming AI Action Summit in France and serve as a global handbook on AI safety to help support policymakers.
Towards a common understanding of advanced AI systems and their risks
The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics.
The first independent International AI Safety Report published today sets out that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal, a development the report identifies as key for policymakers to monitor.
As policymakers worldwide grapple with rapid and unpredictable advances in AI, the report helps bridge the gap between the pace of technological change and the scientific evidence available to guide decision-making. The document sets out the first comprehensive, shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved in recent years and months. Several areas require urgent research attention, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably.
The report sets out three distinct categories of AI risks:
- Malicious use risks: including cyberattacks, the creation of AI-generated child sexual abuse material, and even the development of biological weapons.
- System malfunctions: which include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems.
- Systemic risks: stemming from the widespread adoption of AI, including workforce disruption, privacy concerns, and environmental impacts.
The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace.
While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made. Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their trajectory and the risks they pose are not a foregone conclusion: the outcomes depend on the choices that societies and governments make today and in the future.
Quote
"The capabilities of general-purpose AI have increased rapidly in recent years and months. While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide. This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations," said Yoshua Bengio, Founder and Scientific Director of Mila and Chair of the report.
More information
- A full copy of the report can be found here.
- The UK Government will continue to provide the Secretariat for the report, and Mila's Yoshua Bengio will continue as Chair for 2025.