Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
David Dalrymple
Joar Max Viktor Skalse
Stuart Russell
Max Tegmark
Sanjit A. Seshia
Steve Omohundro
Christian Szegedy
Ben Goldhaber
Nora Ammann
Alessandro Abate
Joe Halpern
Clark Barrett
Ding Zhao
Zhi-Xuan Tan
Jeannette Wing
Joshua B. Tenenbaum
Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.
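The three components named in the abstract (world model, safety specification, verifier) can be illustrated with a deliberately toy sketch. Everything below is hypothetical and not from the paper: a world model that enumerates a small set of possible outcomes for an action, a specification that is a predicate over outcomes, and a verifier that certifies the specification against every modelled outcome.

```python
# Toy illustration (not from the paper) of the guaranteed-safe-AI decomposition:
# a "world model" maps an action to the set of outcomes it considers possible,
# a "safety specification" is a predicate over outcomes, and a "verifier"
# certifies the specification over every modelled outcome.

def world_model(action: float) -> list[float]:
    # Hypothetical dynamics: the outcome lies in a small interval around the
    # action, discretised here so the verifier can enumerate it exhaustively.
    return [action - 0.1, action, action + 0.1]

def safety_spec(outcome: float) -> bool:
    # Acceptable effects: the outcome must stay below a hazard threshold.
    return outcome < 1.0

def verify(action: float) -> bool:
    # A (trivially) auditable certificate: the spec holds for every outcome
    # the world model considers possible for this action.
    return all(safety_spec(o) for o in world_model(action))

print(verify(0.5))   # every outcome in {0.4, 0.5, 0.6} satisfies the spec
print(verify(0.95))  # the outcome 1.05 violates the threshold
```

In the real framework the world model is a rich mathematical description, the specification may be quantitative rather than Boolean, and the verifier produces a machine-checkable proof rather than an exhaustive enumeration; this sketch only shows how the three pieces interlock.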
Interpretability Needs a New Paradigm
Andreas Madsen
Himabindu Lakkaraju
The Canadian VirusSeq Data Portal & Duotang: open resources for SARS-CoV-2 viral sequences and genomic epidemiology
Erin E. Gill
Baofeng Jia
Carmen Lia Murall
Raphael Poujol
Muhammad Zohaib Anwar
Nithu Sara John
Justin Richardsson
Ashley Hobb
Abayomi S. Olabode
Alexandru Lepsa
Ana T. Duggan
Andrea D. Tyler
Arnaud N’Guessan
Atul Kachru
Brandon Chan
Catherine Yoshida
Christina K. Yung
David Bujold
Dusan Andric
Edmund Su
Emma J. Griffiths
Gary Van Domselaar
Gordon W. Jolly
Heather K.E. Ward
Henrich Feher
Jared Baker
Jared T. Simpson
Jaser Uddin
Jiannis Ragoussis
Jon Eubank
Jörg H. Fritz
José Héctor Gálvez
Karen Fang
Kim Cullion
Leonardo Rivera
Linda Xiang
Matthew A. Croxen
Mitchell Shiell
Natalie Prystajecky
Pierre-Olivier Quirion
Rosita Bajari
Samantha Rich
Samira Mubareka
Sandrine Moreira
Scott Cain
Steven G. Sutcliffe
Susanne A. Kraemer
Yann Joly
Yelizar Alturmessov
CPHLN consortium
CanCOGeN consortium
VirusSeq Data Portal Academic
Health Network
Marc Fiume
Terrance P. Snutch
Cindy Bell
Catalina Lopez-Correa
Jeffrey B. Joy
Caroline Colijn
Paul M.K. Gordon
William W.L. Hsiao
Art F.Y. Poon
Natalie C. Knox
Mélanie Courtot
Lincoln Stein
Sarah P. Otto
Guillaume Bourque
B. Jesse Shapiro
Fiona S.L. Brinkman
Quantifying neurodegeneration of the cervical cord and brain in degenerative cervical myelopathy: A multicentre study using quantitative magnetic resonance imaging
Patrick Freund
Viveka Boller
Tim M. Emmenegger
Muhammad Akbar
Markus Hupp
Nikolai Pfender
Claudia A. M. Gandini Wheeler-Kingshott
Michael G. Fehlings
Armin Curt
Maryam Seif
Simultaneous assessment of neurodegeneration in both the cervical cord and brain across multiple centres can enhance the effectiveness of clinical trials. Thus, this study aims to simultaneously assess microstructural changes in the cervical cord and brain above the stenosis in degenerative cervical myelopathy (DCM) using quantitative magnetic resonance imaging (MRI) in a multicentre study.
TorchDriveEnv: A Reinforcement Learning Benchmark for Autonomous Driving with Reactive, Realistic, and Diverse Non-Playable Characters
Jonathan Wilder Lavington
Ke Zhang
Vasileios Lioutas
Matthew Niedoba
Yunpeng Liu
Dylan Green
Saeid Naderiparizi
Xiaoxuan Liang
Setareh Dabiri
Adam Ścibior
Berend Zwartsenberg
Frank Wood
The training, testing, and deployment of autonomous vehicles require realistic and efficient simulators. Moreover, because of the high variability between different problems presented in different autonomous systems, these simulators need to be easy to use, and easy to modify. To address these problems we introduce TorchDriveSim and its benchmark extension TorchDriveEnv. TorchDriveEnv is a lightweight reinforcement learning benchmark programmed entirely in Python, which can be modified to test a number of different factors in learned vehicle behavior, including the effect of varying kinematic models, agent types, and traffic control patterns. Most importantly, unlike many replay-based simulation approaches, TorchDriveEnv is fully integrated with a state-of-the-art behavioral simulation API. This allows users to train and evaluate driving models alongside data-driven Non-Playable Characters (NPCs) whose initializations and driving behavior are reactive, realistic, and diverse. We illustrate the efficiency and simplicity of TorchDriveEnv by evaluating common reinforcement learning baselines in both training and validation environments. Our experiments show that TorchDriveEnv is easy to use, but difficult to solve.
Deep Clustering with Self-Supervision using Pairwise Similarities
Mohammadreza Sadeghi
Deep clustering incorporates embedding into clustering to find a lower-dimensional space appropriate for clustering. In this paper, we propose a novel deep clustering framework with self-supervision using pairwise similarities (DCSS). The proposed method consists of two successive phases. In the first phase, we propose to form hypersphere-like groups of similar data points, i.e. one hypersphere per cluster, employing an autoencoder that is trained using cluster-specific losses. The hyperspheres are formed in the autoencoder's latent space. In the second phase, we propose to employ pairwise similarities to create a