Portrait of Benjamin Prud'homme

Benjamin Prud'homme

Vice-President, Public Policy, Safety and Global Affairs, Leadership Team

Biography

Benjamin Prud'homme is Vice-President, Public Policy, Safety and Global Affairs. He is an expert member at the OECD (OECD.AI), the United Nations (Expert Advisory Network on AI), and UNESCO (AI Ethics Experts Without Borders). He co-leads the Creating Diversity and Substantive Equality in AI Ecosystems project for the Global Partnership on AI (GPAI) and contributes to the International Scientific Report on the Safety of Advanced AI, chaired by Yoshua Bengio. In 2023, he co-led the Mila-UNESCO publication "Angles morts de la gouvernance de l'IA" (Missing Links in AI Governance). Benjamin is a lawyer and serves on the boards of directors of the Canadian Civil Liberties Association, the Observatoire québécois des inégalités, and Aide juridique (Montréal).

Publications

International AI Safety Report: First Key Update, Capabilities and Risk Implications
Prof. Yoshua Bengio
Stephen Clare
Carina Prunkl
Maksym Andriushchenko
Ben Bucknall
Philip Fox
Tiancheng Hu
Cameron Jones
Sam Manning
Nestor Maslej
Vasilios Mavroudis
Conor McGlynn
Malcolm Murray
Charlotte Stix
Lucia Velasco
Nicole Wheeler
Daniel Privitera
Daron Acemoglu
Thomas G. Dietterich
Fredrik Heintz
Geoffrey Hinton
Nick Jennings
Susan Leavy
Teresa Ludermir
Vidushi Marda
Helen Margetts
John McDermid
Jane Munga
Arvind Narayanan
Alondra Nelson
Clara Neppel
Sarvapali D. (Gopal) Ramchurn
Stuart Russell
Marietje Schaake
Bernhard Schölkopf
Alvaro Soto
Lee Tiedrich
Andrew Yao
Ya-Qin Zhang
Lambrini Das
Claire Dennis
Arianna Dini
Freya Hempleman
Samuel Kenny
Patrick King
Hannah Merchant
Jamie-Day Rawal
Rose Woolhouse
The field of AI is moving too quickly for a single yearly publication to keep pace. Significant changes can occur on a timescale of months, sometimes weeks. This is why we are releasing Key Updates: shorter, focused reports that highlight the most important developments between full editions of the International AI Safety Report. With these updates, we aim to provide policymakers, researchers, and the public with up-to-date information to support wise decisions about AI governance. This first Key Update focuses on areas where especially significant changes have occurred since January 2025: advances in general-purpose AI systems' capabilities, and the implications for several critical risks. New training techniques have enabled AI systems to reason step-by-step and operate autonomously for longer periods, allowing them to tackle more kinds of work. However, these same advances create new challenges across biological risks, cyber security, and oversight of AI systems themselves. The International AI Safety Report is intended to help readers assess, anticipate, and manage risks from general-purpose AI systems. These Key Updates ensure that critical developments receive timely attention as the field rapidly evolves.
International AI Safety Report
Bronwyn Fox
André Carlos Ponce de Leon Ferreira de Carvalho
Mona Nemer
Raquel Pezoa Rivera
Yi Zeng
Juha Heikkilä
Guillaume Avrin
Antonio Krüger
Balaraman Ravindran
Hammam Riza
Ciarán Seoighe
Ziv Katzir
Andrea Monti
Hiroaki Kitano
Nusu Mwamanzi
Fahad Albalawi
José Ramón López Portillo
Haroon Sheikh
Gill Jolly
Olubunmi Ajala
Jerry Sheehan
Dominic Vincent Ligot
Kyoung Mu Lee
Crystal Rugege
Denise Wong
Nuria Oliver
Christian Busch
Ahmet Halit Hatip
Oleksii Molchanovskyi
Marwan Alserkal
Chris Johnson
Amandeep Singh Gill
Saif M. Khan
Daniel Privitera
Tamay Besiroglu
Rishi Bommasani
Stephen Casper
Yejin Choi
Philip Fox
Ben Garfinkel
Danielle Goldfarb
Hoda Heidari
Anson Ho
Sayash Kapoor
Leila Khalatbari
Shayne Longpre
Sam Manning
Vasilios Mavroudis
Mantas Mazeika
Julian Michael
Jessica Newman
Kwan Yee Ng
Chinasa T. Okolo
Deborah Raji
Girish Sastry
Elizabeth Seger
Theodora Skeadas
Tobin South
Daron Acemoglu
Olubayo Adekanmbi
David Dalrymple
Thomas G. Dietterich
Edward W. Felten
Pascale Fung
Pierre-Olivier Gourinchas
Fredrik Heintz
Geoffrey Hinton
Nick Jennings
Andreas Krause
Susan Leavy
Percy Liang
Teresa Ludermir
Vidushi Marda
Emma Strubell
Florian Tramèr
Lucia Velasco
Nicole Wheeler
Helen Margetts
John McDermid
Jane Munga
Arvind Narayanan
Alondra Nelson
Clara Neppel
Alice Oh
Gopal Ramchurn
Stuart Russell
Marietje Schaake
Bernhard Schölkopf
Dawn Song
Alvaro Soto
Lee Tiedrich
Andrew Yao
Ya-Qin Zhang
Baran Acar
Ben Clifford
Lambrini Das
Claire Dennis
Freya Hempleman
Hannah Merchant
Rian Overy
Ben Snodin
Benjamin Prud’homme
The first International AI Safety Report comprehensively synthesizes the current evidence on the capabilities, risks, and safety of advanced AI systems. The report was mandated by the nations attending the AI Safety Summit in Bletchley, UK. Thirty nations, the UN, the OECD, and the EU each nominated a representative to the report's Expert Advisory Panel. A total of 100 AI experts contributed, representing diverse perspectives and disciplines. Led by the report's Chair, these independent experts collectively had full discretion over the report's content.
The Singapore Consensus on Global AI Safety Research Priorities
Luke Ong
Stuart Russell
Dawn Song
Max Tegmark
Lan Xue
Ya-Qin Zhang
Stephen Casper
Wan Sie Lee
Vanessa Wilfred
Vidhisha Balachandran
Fazl Barez
Michael Belinsky
Ima Bello
Malo Bourgon
Mark Brakel
Simeon Campos
Duncan Cass-Beggs
Jiahao Chen
Rumman Chowdhury
Chua Kuan Seah
Jeff Clune
Juntao Dai
Agnes Delaborde
Francisco Eiras
Joshua Engels
Jinyu Fan
Adam Gleave
Noah Goodman
Fynn Heide
Johannes Heidecke
Dan Hendrycks
Cyrus Hodes
Bryan Low
Minlie Huang
Sami Jawhar
Jingyu Wang
Adam Kalai
Meindert Kamphuis
Mohan Kankanhalli
Subhash Kantamneni
Mathias Kirk Bonde
Thomas Kwa
Jeffrey Ladish
Kwok Yan Lam
Taewhi Lee
Xiaojian Li
Jiajun Liu
Chaochao Lu
Yifan Mai
Richard Mallah
Julian Michael
Nicolas Moës
Simon Moeller
Kihyuk Nam
Kwan Yee Ng
Mark Nitzberg
Besmira Nushi
Seán Ó hÉigeartaigh
Alejandro Ortega
Pierre Peigné
James Petrie
Nayat Sanchez-Pi
Sarah Schwettmann
Buck Shlegeris
Saad Siddiqui
Anu Sinha
Martin Soto
Cheston Tan
Anthony Tung
William Tjhi
Robert Trager
Brian Tse
John Willes
Denise Wong
Wei Xu
Rongwu Xu
Yi Zeng
Hongjiang Zhang
Djordje Zikelic
Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secure. Building a trusted ecosystem is therefore essential – it helps people embrace AI with confidence and gives maximal space for innovation while avoiding backlash. This requires policymakers, industry, researchers and the broader public to collectively work toward securing positive outcomes from AI's development. AI safety research is a key dimension. Given that the state of science today for building trustworthy AI does not fully cover all risks, accelerated investment in research is required to keep pace with commercially driven growth in system capabilities. Goals: The 2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety aims to support research in this important space by bringing together AI scientists across geographies to identify and synthesise research priorities in AI safety. The result, The Singapore Consensus on Global AI Safety Research Priorities, builds on the International AI Safety Report (IAISR) chaired by Yoshua Bengio and backed by 33 governments. By adopting a defence-in-depth model, this document organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control). Through the Singapore Consensus, we hope to globally facilitate meaningful conversations between AI scientists and AI policymakers for maximally beneficial outcomes. Our goal is to enable more impactful R&D efforts to rapidly develop safety and evaluation mechanisms and foster a trusted ecosystem where AI is harnessed for the public good.
Inherent privacy limitations of decentralized contact tracing apps
Daphne Ippolito
Richard Janda
Max Jarvie
Jean-François Rousseau
Abhinav Sharma
Yun William Yu
COVI White Paper - Version 1.1
Hannah Alsdurf
Prateek Gupta
Daphne Ippolito
Richard Janda
Max Jarvie
Tyler Kolody
Sekoul Krastev
Robert Obryk
Dan Pilat
Nasim Rahaman
Jean-François Rousseau
Abhinav Sharma
Brooke Struck … (see 3 more)
Yun William Yu
The SARS-CoV-2 (Covid-19) pandemic has caused significant strain on public health institutions around the world. Contact tracing is an essential tool to change the course of the Covid-19 pandemic. Manual contact tracing of Covid-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections. Personalized peer-to-peer contact tracing through the use of mobile apps has the potential to shift the paradigm. Some countries have deployed centralized tracking systems, but more privacy-protecting decentralized systems offer much of the same benefit without concentrating data in the hands of a state authority or for-profit corporations. Machine learning methods can circumvent some of the limitations of standard digital tracing by incorporating many clues and their uncertainty into a more graded and precise estimation of infection risk. The estimated risk can provide early risk awareness, personalized recommendations and relevant information to the user. Finally, non-identifying risk data can inform epidemiological models trained jointly with the machine learning predictor. These models can provide statistical evidence for the importance of factors involved in disease transmission. They can also be used to monitor, evaluate and optimize health policy and (de)confinement scenarios according to medical and economic productivity indicators. However, such a strategy based on mobile apps and machine learning should proactively mitigate potential ethical and privacy risks, which could have substantial impacts on society (not only impacts on health but also impacts such as stigmatization and abuse of personal data). Here, we present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI,' a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
COVI White Paper - Version 1.1
H. Alsdurf
T. Deleu
Prateek Gupta
Daphne Ippolito
R. Janda
Max Jarvie
Tyler Kolody
S. Krastev
Robert Obryk
D. Pilat
Nasim Rahaman
I. Rish
J. Rousseau
Abhinav Sharma
B. Struck … (see 3 more)
Yun William Yu