
AJung Moon

Core Academic Member
Associate professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
AI Ethics
AI Safety
Fairness
Human-AI interaction
Human-Centered AI
Human-Computer Interaction (HCI)
Human-Robot Interaction
Robot Ethics
Robotics

Biography

AJung Moon is an experimental roboticist who investigates how robots and AI systems influence the way people move, behave, and make decisions, with the goal of helping us design and deploy such autonomous intelligent systems more responsibly.

At McGill University, she directs the Responsible Autonomy and Intelligent System Ethics (RAISE) Lab, an interdisciplinary initiative that investigates the social and ethical implications of robots and AI systems and explores what it means for engineers to design and deploy such systems responsibly for a better technological future.

Current Students

PhD - McGill University
PhD - McGill University
PhD - McGill University
PhD - McGill University
Postdoctorate - McGill University
PhD - McGill University

Publications

Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
Opening the Scope of Openness in AI
Tamara Paris
Roboethics for everyone – A hands-on teaching module for K-12 and beyond
In this work, we address the evolving landscape of roboethics, expanding beyond physical safety to encompass broader societal implications. Recognizing the siloed nature of existing initiatives to teach and inform ethical implications of artificial intelligence (AI) and robotic systems, we present a roboethics teaching module designed for K-12 students and general audiences. The module focuses on the high-level analysis of the interplay between robot behaviour design choices and ethics, using everyday social dilemmas. We delivered the module in a workshop to high school students in Montreal, Canada. From this experience, we observed that the module successfully fostered critical thinking and ethical considerations in students, without requiring advanced technical knowledge. This teaching module holds promise to reach a wider range of populations. We urge the education community to explore similar approaches and engage in interdisciplinary training opportunities regarding the ethical implications of AI and robotics.
From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems
How different mental models of AI-based writing assistants impact writers’ interactions with them
Su Lin Blodgett
Q. Vera Liao
Investigating Robot Influence on Human Behaviour By Leveraging Entrainment Effects
Perspectives on Robotic Systems for the Visually Impaired
Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review.