
Nitarshan Rajkumar

Alumni

Publications

Open Problems in Technical AI Governance
Anka Reuel
Benjamin Bucknall
Stephen Casper
Timothy Fist
Lisa Soder
Onni Aarne
Lewis Hammond
Lujain Ibrahim
Peter Wills
Markus Anderljung
Ben Garfinkel
Lennart Heim
Andrew Trask
Gabriel Mukobi
Rylan Schaeffer
Mauricio Baker
Sara Hooker
Irene Solaiman
Alexandra Luccioni
Nicolas Moës
Jeffrey Ladish
David Bau
Paul Bricman
Neel Guha
Jessica Newman
Tobin South
Alex Pentland
Sanmi Koyejo
Mykel Kochenderfer
Robert Trager
AI progress is creating a growing range of risks and opportunities, but it is often unclear how they should be navigated. In many cases, the barriers and uncertainties faced are at least partly technical. Technical AI governance, referring to technical analysis and tools for supporting the effective governance of AI, seeks to address such challenges. It can help to (a) identify areas where intervention is needed, (b) identify and assess the efficacy of potential governance actions, and (c) enhance governance options by designing mechanisms for enforcement, incentivization, or compliance. In this paper, we explain what technical AI governance is, why it is important, and present a taxonomy and incomplete catalog of its open problems. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
IDs for AI Systems
Noam Kolt
Peter Wills
Usman Anwar
Christian Schroeder de Witt
Lewis Hammond
Lennart Heim
Markus Anderljung
AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications. An investigator may not know whom to investigate when a system causes an incident. It may not be clear whom to contact to shut down a malfunctioning system. Across a number of domains, IDs address analogous problems by identifying particular entities (e.g., a particular Boeing 747) and providing information about other entities of the same class (e.g., some or all Boeing 747s). We propose a framework in which IDs are ascribed to instances of AI systems (e.g., a particular chat session with Claude 3), and associated information is accessible to parties seeking to interact with that system. We characterize IDs for AI systems, provide concrete examples where IDs could be useful, argue that there could be significant demand for IDs from key actors, analyze how those actors could incentivize ID adoption, explore a potential implementation of our framework for deployers of AI systems, and highlight limitations and risks. IDs seem most warranted in settings where AI systems could have a large impact upon the world, such as in making financial transactions or contacting real humans. With further study, IDs could help to manage a world where AI systems pervade society.
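The framework described above can be pictured as a registry that maps an instance-level ID to information about that instance and its system class. The sketch below is a hypothetical illustration only; the class, field, and registry names are assumptions, not taken from the paper, which does not specify an implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: an ID is ascribed to an *instance* of an AI system
# (e.g., one chat session) and resolves to information about that instance
# and its system class. All field names here are illustrative.
@dataclass(frozen=True)
class AISystemID:
    instance_id: str            # identifies this particular instance
    system_class: str           # the underlying model or deployment class
    deployer_contact: str       # whom to contact about incidents or shutdown
    certifications: tuple = ()  # safety certifications claimed for the class

# A registry lets parties interacting with a system resolve its ID.
class IDRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AISystemID] = {}

    def register(self, record: AISystemID) -> None:
        self._records[record.instance_id] = record

    def lookup(self, instance_id: str) -> Optional[AISystemID]:
        return self._records.get(instance_id)

registry = IDRegistry()
registry.register(AISystemID(
    instance_id="session-0042",
    system_class="example-chat-model",
    deployer_contact="incidents@example.com",
    certifications=("example-safety-audit",),
))
record = registry.lookup("session-0042")
```

Under this sketch, an investigator holding only `"session-0042"` could recover the deployer contact, and a user could check the class-level certifications before engaging.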
Visibility into AI Agents
Carson Ezell
Max Kaufmann
Kevin Wei
Lewis Hammond
Herbie Bradley
Emma Bluemke
Noam Kolt
Lennart Heim
Markus Anderljung
Increased delegation of commercial, scientific, governmental, and personal activities to AI agents—systems capable of pursuing complex goals with limited supervision—may exacerbate existing societal risks and introduce new risks. Understanding and mitigating these risks involves critically evaluating existing governance structures, revising and adapting these structures where needed, and ensuring accountability of key stakeholders. Information about where, why, how, and by whom certain AI agents are used, which we refer to as visibility, is critical to these objectives. In this paper, we assess three categories of measures to increase visibility into AI agents: agent identifiers, real-time monitoring, and activity logging. For each, we outline potential implementations that vary in intrusiveness and informativeness. We analyze how the measures apply across a spectrum of centralized through decentralized deployment contexts, accounting for various actors in the supply chain including hardware and software service providers. Finally, we discuss the implications of our measures for privacy and concentration of power. Further work into understanding the measures and mitigating their negative impacts can help to build a foundation for the governance of AI agents.
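Two of the three measure categories named above, agent identifiers and activity logging, can be sketched together: each agent carries a stable identifier, and every action it takes is appended to a log visible to an overseer. This is a minimal hypothetical illustration; the wrapper and record fields are assumptions for exposition, not a design proposed in the paper.

```python
import time
import uuid
from typing import Any, Callable

# Hypothetical sketch of two visibility measures from the paper:
# (1) an agent identifier attached to the agent, and
# (2) an activity log recording who did what, when.
class VisibleAgent:
    def __init__(self, deployer: str, activity_log: list) -> None:
        self.agent_id = str(uuid.uuid4())  # agent identifier
        self.deployer = deployer
        self._log = activity_log           # log shared with an overseer

    def act(self, action: str, fn: Callable[[], Any]) -> Any:
        result = fn()
        # Activity logging: every action is recorded with the agent's ID.
        self._log.append({
            "agent_id": self.agent_id,
            "deployer": self.deployer,
            "action": action,
            "timestamp": time.time(),
        })
        return result

activity_log: list = []
agent = VisibleAgent(deployer="example-deployer", activity_log=activity_log)
result = agent.act("lookup_weather", lambda: "sunny")
```

Real-time monitoring, the third measure, would correspond to an overseer consuming this log as it is written rather than after the fact; the paper discusses implementations varying in intrusiveness and informativeness rather than prescribing one.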