Publications

The Singapore Consensus on Global AI Safety Research Priorities
Luke Ong
Stuart Russell
Dawn Song
Max Tegmark
Lan Xue
Ya-Qin Zhang
Stephen Casper
Wan Sie Lee
Vanessa Wilfred
Vidhisha Balachandran
Fazl Barez
Michael Belinsky
Imane Bello
Malo Bourgon
Mark Brakel
Siméon Campos
Duncan Cass-Beggs
Jiahao Chen
Rumman Chowdhury
Kuan Chua Seah
Jeff Clune
Juntao Dai
Agnès Delaborde
Francisco Eiras
Joshua Engels
Jinyu Fan
Adam Gleave
Noah D. Goodman
Fynn Heide
Johannes Heidecke
Dan Hendrycks
Cyrus Hodes
Bryan Low Kian Hsiang
Minlie Huang
Sami Jawhar
Jingyu Wang
Adam Tauman Kalai
Meindert Kamphuis
Mohan S. Kankanhalli
Subhash Kantamneni
Mathias Bonde Kirk
Thomas Kwa
Jeffrey Ladish
Kwok-Yan Lam
Taewhi Lee
Xiaojian Li
Jiajun Liu
Chaochao Lu
Yifan Mai
Richard Mallah
Julian Michael
Nick Moës
Simon Möller
Kihyuk Nam
Kwan Yee Ng
Mark Nitzberg
Besmira Nushi
Seán Ó hÉigeartaigh
Alejandro Ortega
Pierre Peigné
James Petrie
Nayat Sanchez-Pi
Sarah Schwettmann
Buck Shlegeris
Saad Siddiqui
Aradhana Sinha
Martín Soto
Cheston Tan
Dong Ting
William-Chandra Tjhi
Robert Trager
Brian Tse
Anthony K. H. Tung
John Willes
Denise Wong
W. Xu
Rongwu Xu
Yi Zeng
HongJiang Zhang
Djordje Zikelic
Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secure. Building a trusted ecosystem is therefore essential – it helps people embrace AI with confidence and gives maximal space for innovation while avoiding backlash. This requires policymakers, industry, researchers and the broader public to collectively work toward securing positive outcomes from AI’s development. AI safety research is a key dimension. Given that the state of science today for building trustworthy AI does not fully cover all risks, accelerated investment in research is required to keep pace with commercially driven growth in system capabilities. Goals: The 2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety aims to support research in this important space by bringing together AI scientists across geographies to identify and synthesise research priorities in AI safety. The result, The Singapore Consensus on Global AI Safety Research Priorities, builds on the International AI Safety Report (IAISR) chaired by Yoshua Bengio and backed by 33 governments. By adopting a defence-in-depth model, this document organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control). Through the Singapore Consensus, we hope to facilitate meaningful conversations globally between AI scientists and AI policymakers for maximally beneficial outcomes. Our goal is to enable more impactful R&D efforts to rapidly develop safety and evaluation mechanisms and foster a trusted ecosystem where AI is harnessed for the public good.
Inter-brain Synchronization in the Alpha Band during Minimal Tactile Interaction
Chen Lam Loh
Leonardo Zapata-Fonseca
Mark M. James
Tom Froese
Prompt learning with bounding box constraints for medical image segmentation
Mehrdad Noori
Sahar Dastani
Christian Desrosiers
Pixel-wise annotations are notoriously laborious and costly to obtain in the medical domain. To mitigate this burden, weakly supervised approaches based on bounding box annotations, which are much easier to acquire, offer a practical alternative. Vision foundation models have recently shown noteworthy segmentation performance when provided with prompts such as points or bounding boxes. Prompt learning exploits these models by adapting them to downstream tasks and automating segmentation, thereby reducing user intervention. However, existing prompt learning approaches depend on fully annotated segmentation masks. This paper proposes a novel framework that combines the representational power of foundation models with the annotation efficiency of weakly supervised segmentation. More specifically, our approach automates prompt generation for foundation models using only bounding box annotations. Our proposed optimization scheme integrates multiple constraints derived from box annotations with pseudo-labels generated by the prompted foundation model. Extensive experiments across multi-modal datasets reveal that our weakly supervised method achieves an average Dice score of 84.90% in a limited data setting, outperforming existing fully-supervised and weakly-supervised approaches. The code will be made available upon acceptance.
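The abstract does not give implementation details, so the following is a minimal PyTorch sketch of the general idea only: a small learnable prompt module steers a frozen promptable segmenter, trained with box-derived constraints (background suppression outside the box plus a tightness prior inside it) and the prompted model's own pseudo-labels. `PromptNet`, the stand-in `frozen_fm`, and the 0.9 tightness threshold are hypothetical, not the paper's actual architecture or losses.

```python
# Hedged sketch of weakly supervised prompt learning from box annotations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptNet(nn.Module):
    """Tiny learnable module mapping an image to dense prompt logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def box_constraint_loss(probs, box):
    """Constraints derived from a box (x0, y0, x1, y1): (1) suppress
    foreground outside the box; (2) tightness: every row/column crossing
    the box should contain some foreground mass."""
    x0, y0, x1, y1 = box
    outside = probs.clone()
    outside[..., y0:y1, x0:x1] = 0.0
    l_out = outside.sum() / probs.numel()
    rows = probs[..., y0:y1, x0:x1].amax(dim=-1)   # max per row inside box
    cols = probs[..., y0:y1, x0:x1].amax(dim=-2)   # max per column inside box
    l_tight = F.relu(0.9 - rows).mean() + F.relu(0.9 - cols).mean()
    return l_out + l_tight

# Frozen "foundation model" stand-in: in practice this would be a
# promptable segmenter such as SAM; a fixed conv keeps the sketch
# self-contained.
frozen_fm = nn.Conv2d(2, 1, 3, padding=1)
for p in frozen_fm.parameters():
    p.requires_grad_(False)

prompt_net = PromptNet()
opt = torch.optim.Adam(prompt_net.parameters(), lr=1e-3)

img = torch.rand(1, 1, 64, 64)   # toy image
box = (16, 16, 48, 48)           # weak annotation: a single bounding box

for step in range(100):
    prompt = prompt_net(img)
    probs = frozen_fm(torch.cat([img, prompt.sigmoid()], dim=1)).sigmoid()
    with torch.no_grad():        # pseudo-labels from the prompted model
        pseudo = (probs > 0.5).float()
    loss = box_constraint_loss(probs, box) + F.binary_cross_entropy(probs, pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
```

Only the prompt module receives gradients here; the point of the design is that the foundation model stays frozen while weak box supervision shapes the prompts.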
Spatially and non-spatially tuned hippocampal neurons are linear perceptual and nonlinear memory encoders
Kaicheng Yan
Benjamin Corrigan
Roberto Gulli
Julio Martinez-Trujillo
Learning to combine top-down context and feed-forward representations under ambiguity with apical and basal dendrites
Guillaume Etter
Busra Tugce Gurbuz
Multi-Agent Matrix Games with Individual learners: How Exploration-Exploitation Strategies Impact the Emergence of Coordination
Coordination between independent learning agents in a multi-agent environment is an important problem, as AI systems may impact each other's learning process. In this paper, we study how individual agents converge to the optimal equilibrium in multi-agent settings where coordination is necessary to achieve optimality. Specifically, we cover both coordination to maximize each agent's individual payoff and coordination to maximize the collective payoff (cooperation). We study the emergence of such coordination behaviours in two-player matrix games with unknown payoff matrices and noisy bandit feedback. We consider five different environments along with widely used deterministic and stochastic bandit strategies, and study how different learning strategies and observation noise influence convergence to the optimal equilibrium. Our results indicate that coordination often emerges more easily from interactions between deterministic agents, especially when they follow the same learning behaviour. However, stochastic learning strategies appear to be more robust in the presence of many optimal joint actions. Overall, noisy observations often help stabilize learning behaviours.
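As a concrete illustration of this setup, here is a minimal sketch of two independent bandit learners in a noisy 2x2 coordination game. The epsilon-greedy strategy, payoff matrix, and noise level are illustrative choices, not necessarily those used in the paper.

```python
# Hedged sketch: independent learners with noisy bandit feedback in a
# 2x2 coordination matrix game (both agents rewarded for matching).
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])   # two optimal joint actions: (0,0), (1,1)

class EpsGreedy:
    def __init__(self, n_actions=2, eps=0.1):
        self.eps = eps
        self.q = np.zeros(n_actions)
        self.n = np.zeros(n_actions)
    def act(self):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))
    def update(self, a, r):
        self.n[a] += 1
        self.q[a] += (r - self.q[a]) / self.n[a]   # incremental mean

a1, a2 = EpsGreedy(), EpsGreedy()
for t in range(5000):
    i, j = a1.act(), a2.act()
    r = PAYOFF[i, j] + rng.normal(0, 0.5)   # shared payoff + observation noise
    a1.update(i, r)   # each agent sees only its own action and noisy reward
    a2.update(j, r)

print("agent 1 prefers action", np.argmax(a1.q))
print("agent 2 prefers action", np.argmax(a2.q))
```

Swapping `EpsGreedy` for a deterministic strategy such as UCB makes the deterministic-versus-stochastic comparison from the abstract easy to reproduce at toy scale.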
Opening the Scope of Openness in AI
Tamara Paris
Relative Explanations for Contextual Problems with Endogenous Uncertainty: An Application to Competitive Facility Location
Jasone Ramírez-Ayerbe
In this paper, we consider contextual stochastic optimization problems under endogenous uncertainty, where decisions affect the underlying distributions. To implement such decisions in practice, it is crucial to ensure that their outcomes are interpretable and trustworthy. To this end, we compute relative counterfactual explanations that provide practitioners with concrete changes in the contextual covariates required for a solution to satisfy specific constraints. Whereas relative explanations have been introduced in prior literature, to the best of our knowledge this is the first work focusing on problems with binary decision variables and endogenous uncertainty. We propose a methodology that uses the Wasserstein distance as a regularization term, which leads to a reduction in computation times compared to its unregularized counterpart. We illustrate the method using a choice-based competitive facility location problem and present numerical experiments that demonstrate its ability to efficiently compute sparse and interpretable explanations.
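To make the idea concrete, the following is a toy sketch of a relative counterfactual explanation with a Wasserstein regularizer. It relaxes the paper's binary-decision setting to a continuous surrogate solvable with SciPy; the linear demand model, weights, and feasibility threshold are all hypothetical stand-ins for the paper's choice-based facility location formulation.

```python
# Hedged sketch: find a sparse change to contextual covariates x so a
# feasibility constraint holds, while penalizing how far the induced
# (endogenous) demand distribution shifts, measured by Wasserstein distance.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x0 = np.array([0.2, -0.5, 0.1])       # observed contextual covariates
w = np.array([1.0, 0.8, -0.3])        # toy demand-model weights
base = rng.standard_normal(200)       # fixed noise (common random numbers)

def demand_samples(x):
    # Endogenous uncertainty: covariates shift the demand distribution.
    return base + w @ x

ref = demand_samples(x0)              # distribution under observed context

def objective(x, lam=1.0):
    sparsity = np.abs(x - x0).sum()                       # small, sparse change
    shift = wasserstein_distance(demand_samples(x), ref)  # distributional shift
    return sparsity + lam * shift

# Constraint: expected demand under the new covariates must exceed 1.0.
cons = {"type": "ineq", "fun": lambda x: w @ x - 1.0}
res = minimize(objective, x0, constraints=[cons], method="SLSQP")
print("counterfactual covariates:", np.round(res.x, 3))
print("required change:", np.round(res.x - x0, 3))
```

The returned difference `res.x - x0` plays the role of the relative explanation: the concrete covariate change a practitioner would need for the constraint to be satisfied.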
A Survey of State Representation Learning for Deep Reinforcement Learning
Representation learning methods are an important tool for addressing the challenges posed by complex observation spaces in sequential decision-making problems. Recently, a wide variety of approaches have been used to learn meaningful state representations in reinforcement learning, enabling better sample efficiency, generalization, and performance. This survey aims to provide a broad categorization of these methods within a model-free online setting, exploring how they tackle the learning of state representations differently. We categorize the methods into six main classes, detailing their mechanisms, benefits, and limitations. Through this taxonomy, we aim to enhance the understanding of this field and provide a guide for new researchers. We also discuss techniques for assessing the quality of representations, and detail relevant future directions.
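As a pointer to what such methods look like in practice, here is a minimal sketch of one common family a categorization like this would cover: an observation encoder trained with an auxiliary reconstruction loss alongside a model-free RL objective. The network sizes, the stand-in TD targets, and the 0.1 auxiliary weighting are illustrative, not drawn from the survey.

```python
# Hedged sketch: state representation learning via an auxiliary
# reconstruction loss shared with a model-free value head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, obs_dim))
    def forward(self, obs):
        z = self.enc(obs)
        return z, self.dec(z)   # latent state and reconstruction

enc = Encoder()
q_head = nn.Linear(8, 4)        # Q-values over 4 discrete actions
opt = torch.optim.Adam(list(enc.parameters()) + list(q_head.parameters()),
                       lr=3e-4)

obs = torch.rand(32, 64)        # toy batch of observations
target_q = torch.rand(32, 4)    # stand-in for TD targets from a replay buffer

z, recon = enc(obs)
rl_loss = F.mse_loss(q_head(z), target_q)   # model-free RL objective
aux_loss = F.mse_loss(recon, obs)           # representation-shaping auxiliary
opt.zero_grad()
(rl_loss + 0.1 * aux_loss).backward()
opt.step()
```

Other classes in such a taxonomy (contrastive, predictive, metric-based, and so on) would swap out the auxiliary term while keeping this overall encoder-plus-RL-head structure.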