
Shalaleh Rismani

Postdoctoral Researcher - McGill University
Research Topics
AI Ethics
AI Safety
Creativity
Human-AI interaction
Human-Centered AI
Human-Computer Interaction (HCI)
Human-Robot Interaction
Responsible AI
Risk Analysis
Robot Ethics
Safety Engineering

Publications

International AI Safety Report Second Key Update: Technical Safeguards and Risk Management
Stephen Clare
Carina Prunkl
Maksym Andriushchenko
Ben Bucknall
Philip Fox
Nestor Maslej
Conor McGlynn
Malcolm Murray
Stephen Casper
Jessica Newman
Daniel Privitera
Daron Acemoglu
Thomas G. Dietterich
Fredrik Heintz
Geoffrey Hinton
Nick Jennings
Susan Leavy
Teresa Ludermir
Vidushi Marda
Helen Margetts
John McDermid
Jane Munga
Arvind Narayanan
Alondra Nelson
Clara Neppel
Sarvapali D. (Gopal) Ramchurn
Stuart Russell
Marietje Schaake
Bernhard Schölkopf
Alvaro Soto
Lee Tiedrich
Andrew Yao
Ya-Qin Zhang
This is the Second Key Update to the 2025 International AI Safety Report. The First Key Update (1) discussed developments in the capabilities of general-purpose AI models and systems and associated risks. This Key Update covers how various actors, including researchers, companies, and governments, are approaching risk management and technical mitigations for AI. The past year has seen important developments in AI risk management, including better techniques for training safer models and monitoring their outputs. While this represents tangible progress, significant gaps remain. It is often uncertain how effective current measures are at preventing harms, and effectiveness varies across time and applications. There are many opportunities to further strengthen existing safeguard techniques and to develop new ones. This Key Update provides a concise overview of critical developments in risk management practices and technical risk mitigation since the publication of the 2025 AI Safety Report in January. It highlights where progress is being made and where gaps remain. Above all, it aims to support policymakers, researchers, and the public in navigating a rapidly changing environment, helping them to make informed and timely decisions about the governance of general-purpose AI.
Professor Yoshua Bengio, Université de Montréal / LawZero / Mila – Quebec AI Institute, Chair
International AI Safety Report: First Key Update, Capabilities and Risk Implications
Prof. Yoshua Bengio
Stephen Clare
Carina Prunkl
Maksym Andriushchenko
Ben Bucknall
Philip Fox
Tiancheng Hu
Cameron Jones
Sam Manning
Nestor Maslej
Vasilios Mavroudis
Conor McGlynn
Malcolm Murray
Charlotte Stix
Lucia Velasco
Nicole Wheeler
Daniel Privitera
Daron Acemoglu
Thomas G. Dietterich
Fredrik Heintz
Geoffrey Hinton
Nick Jennings
Susan Leavy
Teresa Ludermir
Vidushi Marda
Helen Margetts
John McDermid
Jane Munga
Arvind Narayanan
Alondra Nelson
Clara Neppel
Sarvapali D. (Gopal) Ramchurn
Stuart Russell
Marietje Schaake
Bernhard Schölkopf
Alvaro Soto
Lee Tiedrich
Andrew Yao
Ya-Qin Zhang
Lambrini Das
Claire Dennis
Arianna Dini
Freya Hempleman
Samuel Kenny
Patrick King
Hannah Merchant
Jamie-Day Rawal
Rose Woolhouse
The field of AI is moving too quickly for a single yearly publication to keep pace. Significant changes can occur on a timescale of months, sometimes weeks. This is why we are releasing Key Updates: shorter, focused reports that highlight the most important developments between full editions of the International AI Safety Report. With these updates, we aim to provide policymakers, researchers, and the public with up-to-date information to support wise decisions about AI governance. This first Key Update focuses on areas where especially significant changes have occurred since January 2025: advances in general-purpose AI systems' capabilities, and the implications for several critical risks. New training techniques have enabled AI systems to reason step-by-step and operate autonomously for longer periods, allowing them to tackle more kinds of work. However, these same advances create new challenges across biological risks, cyber security, and oversight of AI systems themselves. The International AI Safety Report is intended to help readers assess, anticipate, and manage risks from general-purpose AI systems. These Key Updates ensure that critical developments receive timely attention as the field rapidly evolves.
Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
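As a rough illustration of the systems-level framing the paper argues for, the sketch below registers each measure against a principle, a system component, and whether it connects to where harm is experienced. This is a minimal sketch under simplified assumptions: the class name, fields, and example entries are hypothetical, not the paper's actual coding scheme.

    from dataclasses import dataclass

    @dataclass
    class EthicsMeasure:
        name: str          # e.g., a published fairness metric (hypothetical entries below)
        principle: str     # one of the 11 AI ethics principles from the scoping review
        component: str     # system component assessed: "data", "model", "output", "interaction"
        harm_linked: bool  # does the measure connect to where harm is experienced?
        has_threshold: bool  # does it come with guidance for a meaningful threshold?

    # Hypothetical entries reflecting the paper's finding that most measures
    # target model or output components under a few dominant principles.
    catalog = [
        EthicsMeasure("demographic_parity_gap", "fairness", "output",
                      harm_linked=False, has_threshold=False),
        EthicsMeasure("membership_inference_risk", "privacy", "model",
                      harm_linked=False, has_threshold=True),
    ]

    # Surface the gap the paper highlights: measures disconnected from harms.
    unanchored = [m.name for m in catalog if not m.harm_linked]
    print(f"Measures lacking a link to experienced harm: {unanchored}")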
Roboethics for Everyone – A Hands-On Teaching Module for K-12 and Beyond
In this work, we address the evolving landscape of roboethics, expanding beyond physical safety to encompass broader societal implications. Recognizing the siloed nature of existing initiatives to teach and inform ethical implications of artificial intelligence (AI) and robotic systems, we present a roboethics teaching module designed for K-12 students and general audiences. The module focuses on the high-level analysis of the interplay between robot behaviour design choices and ethics, using everyday social dilemmas. We delivered the module in a workshop to high school students in Montreal, Canada. From this experience, we observed that the module successfully fostered critical thinking and ethical considerations in students, without requiring advanced technical knowledge. This teaching module holds promise to reach a wider range of populations. We urge the education community to explore similar approaches and engage in interdisciplinary training opportunities regarding the ethical implications of AI and robotics.
From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems
How different mental models of AI-based writing assistants impact writers’ interactions with them
A.R. Olteanu
Q. Vera Liao
Driving into the Loop: Mapping Automation Bias and Liability Issues for Advanced Driver Assistance Systems
Katie Szilagyi
Jason Millar
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Renee Shelby
Andrew J Smart
Renelito Delos Santos
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. Results of our analysis demonstrate that the safety frameworks – neither of which is explicitly designed to examine social and ethical risks – can uncover failures and hazards that pose social and ethical risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the importance of looking beyond the ML model itself when examining social and ethical risks, especially when we have minimal information about an ML model.
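FMEA conventionally scores each failure mode on severity, occurrence, and detection (each on a 1-10 scale) and ranks them by Risk Priority Number (RPN = severity × occurrence × detection). The sketch below shows what an FMEA-style worksheet for a text-to-image pipeline might look like; the three stages follow the abstract, but the failure modes and scores are hypothetical, not the authors' actual analysis.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        stage: str        # e.g., "data processing", "model integration", "use"
        description: str
        severity: int     # 1 (negligible) to 10 (catastrophic)
        occurrence: int   # 1 (rare) to 10 (frequent)
        detection: int    # 1 (easily detected) to 10 (hard to detect)

        @property
        def rpn(self) -> int:
            # Standard FMEA Risk Priority Number.
            return self.severity * self.occurrence * self.detection

    # Hypothetical worksheet entries, one per pipeline stage.
    worksheet = [
        FailureMode("data processing", "training captions encode demographic stereotypes", 8, 6, 7),
        FailureMode("model integration", "safety filter and T2I model disagree on policy scope", 7, 4, 5),
        FailureMode("use", "benign prompt yields harmful depiction of a real person", 9, 3, 6),
    ]

    # Rank failure modes so the highest-risk items are triaged first.
    for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
        print(f"RPN {fm.rpn:>3} | {fm.stage}: {fm.description}")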
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Kathryn Henne
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
What does it mean to be a responsible AI practitioner: An ontology of roles and skills
With the growing need to regulate AI systems across a wide variety of application domains, a new set of occupations has emerged in the industry. The so-called responsible Artificial Intelligence (AI) practitioners or AI ethicists are generally tasked with interpreting and operationalizing best practices for ethical and safe design of AI systems. Due to the nascent nature of these roles, however, it is unclear to future employers and aspiring AI ethicists what specific function these roles serve and what skills are necessary to serve them. Without clarity on these, we cannot train future AI ethicists with meaningful learning objectives. In this work, we examine what responsible AI practitioners do in the industry and what skills they employ on the job. We propose an ontology of existing roles alongside the skills and competencies that serve each role. We created this ontology by examining job postings for such roles over a two-year period (2020-2022) and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. Our ontology supports business leaders looking to build responsible AI teams and provides educators with a set of competencies that an AI ethics curriculum can prioritize.
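To make the idea concrete, here is a minimal sketch of how such a role-skill ontology might be encoded; the role titles, functions, and skills below are illustrative placeholders, not the paper's actual taxonomy.

    from dataclasses import dataclass, field

    @dataclass
    class Role:
        title: str
        functions: list[str] = field(default_factory=list)
        skills: list[str] = field(default_factory=list)

    # Hypothetical roles and competencies for illustration only.
    ontology = [
        Role(
            title="AI Ethicist",
            functions=["interpret ethics principles", "advise product teams"],
            skills=["applied ethics", "stakeholder engagement", "policy analysis"],
        ),
        Role(
            title="Responsible AI Engineer",
            functions=["operationalize best practices", "audit model behaviour"],
            skills=["ML evaluation", "fairness tooling", "risk documentation"],
        ),
    ]

    # An educator could query the ontology for competencies a curriculum should cover.
    curriculum = sorted({skill for role in ontology for skill in role.skills})
    print(curriculum)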
Harms from Increasingly Agentic Algorithmic Systems
Alva Markelius
Chris Pang
Dmitrii Krasheninnikov
Lauro Langosco
Zhonghao He
Yawen Duan
Micah Carroll
Alex Mayhew
Katherine Collins
John Burden
Wanru Zhao
Konstantinos Voudouris
Umang Bhatt
Adrian Weller
David Krueger
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms and the creation of novel ones. In response, the FATE community has emphasized the importance of anticipating harms. Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than providing a definition of agency as a binary property, we identify 4 key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms which arise from increasing agency -- notably, these include systemic and/or long-range impacts, often on marginalized stakeholders. We emphasize that recognizing agency of algorithmic systems does not absolve or shift the human responsibility for algorithmic harms. Rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on implications of our work for anticipating algorithmic harms from emerging systems.
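As an illustration only, the sketch below scores a system on the four characteristics the paper identifies as tending to increase agency; the 0-3 scale and the example profile are hypothetical assumptions, not part of the paper.

    # The four characteristics come from the abstract above; the scale is assumed.
    CHARACTERISTICS = (
        "underspecification",    # how much of the objective is left implicit
        "directness_of_impact",  # how directly outputs affect the world
        "goal_directedness",     # degree of explicit objective pursuit
        "long_term_planning",    # horizon over which the system optimizes
    )

    def agency_profile(scores: dict[str, int]) -> dict[str, int]:
        """Validate that all four characteristics are scored on a 0-3 scale."""
        for name in CHARACTERISTICS:
            if not 0 <= scores.get(name, -1) <= 3:
                raise ValueError(f"Missing or out-of-range score for {name}")
        return scores

    # Hypothetical example: a system scoring high on all four characteristics
    # would be flagged for the anticipatory harm analysis the paper argues for.
    profile = agency_profile({
        "underspecification": 2,
        "directness_of_impact": 3,
        "goal_directedness": 3,
        "long_term_planning": 2,
    })
    print(f"Combined agency indicators: {sum(profile.values())}/12")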
From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML
Renee Shelby
Andrew J Smart
Edgar Jatho
Joshua A. Kroll