
AJung Moon

Core Academic Member
Associate Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Fairness
AI Ethics
Robot Ethics
Human-Centred AI
Human-AI Interaction
Human-Computer Interaction (HCI)
Human-Robot Interaction
Robotics
AI Safety

Biography

AJung Moon is an experimental roboticist. She studies how robots and artificial intelligence systems influence the way people move, behave, and make decisions, in order to determine how we can design and deploy such autonomous intelligent systems more responsibly.

At McGill University, she is the director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. The RAISE lab is an interdisciplinary group that investigates the social and ethical implications of robots and artificial intelligence systems, and explores what it means for engineers to design and deploy these systems responsibly for a better technological future.

Current Students

PhD - McGill
PhD - McGill
Master's (research) - McGill
PhD - McGill
PhD - McGill
Principal supervisor:
PhD - McGill

Publications

From Use to Oversight: How Mental Models Influence User Behavior and Output in AI Writing Assistants
AI-based writing assistants are ubiquitous, yet little is known about how users' mental models shape their use. We examine two types of mental models -- functional, or related to what the system does, and structural, or related to how the system works -- and how they affect control behavior -- how users request, accept, or edit AI suggestions as they write -- and writing outcomes. We primed participants (
Responsible Humanoids: A Contradiction in Terms?
Séverin Lemaignan
Simon Coghlan
Emily C. Collins
Vanessa Evers
Nico Hochgeschwender
Sara Ljungblad
Michael Milford
Sarah Moth-Lund Christensen
Francisco J. Rodríguez Lera
Pericle Salvini
Yi Yang
In this paper, we critically examine the current "humanoid hype" in robotics, questioning its alignment with responsible robotics principles. While technical challenges drive internal fascination, the pervasive public image of humanoids demands deeper HRI engagement. We explore how responsible robotics concepts, such as privacy, dignity, and trust, are uniquely challenged or overlooked in the pursuit of anthropomorphic robot forms. By dissecting this hype, and mapping the main findings of the recently-published Roadmap for Responsible Robotics to the humanoids field, we aim to move beyond technical form-factor obsessions to understand the true societal implications and identify potential blind spots for the HRI community.
Understanding Social Appropriateness Perceptions in Secondary Users of Domestic Robots
Seol Han
Rachel Ruddy
A new generation of robots is being developed to enter our homes in a matter of months. But has the industry appropriately accounted for the complexities of the social environment that we call home? We conducted an exploratory design workshop to examine what secondary users—those who are not expected to be owners but are nonetheless daily users—deem to be socially appropriate behavior of a domestic robot. A total of 90 students from Mexico participated in the study. By analyzing how they define and reason about the appropriateness of robot behaviors in the home, we show why the deployment of domestic robots requires much more thoughtful consideration than the implementation of simplified social rules; judgments of what is appropriate depend on context, roles, relationships, and individual boundaries, and can differ between primary and secondary users. We call on Human-Robot Interaction (HRI) practitioners to treat social appropriateness as a fluid, gradient factor at design time rather than a binary concept (appropriate/inappropriate).
Responsible AI measures dataset for ethics evaluation of AI systems
Meaningful governance of any system requires the system to be assessed and monitored effectively. In the domain of Artificial Intelligence (AI), global efforts have established a set of ethical principles, including fairness, transparency, and privacy upon which AI governance expectations are being built. The computing research community has proposed numerous means of measuring an AI system's normative qualities along these principles. Current reporting of these measures is principle-specific, limited in scope, or otherwise dispersed across publication platforms, hindering the domain's ability to critique its practices. To address this, we introduce the Responsible AI Measures Dataset, consolidating 12,067 data points across 791 evaluation measures covering 11 ethical principles. It is extracted from a corpus of computing literature (n = 257) published between 2011 and 2023. The dataset includes detailed descriptions of each measure, AI system characteristics, and publication metadata. An accompanying, interactive visualization tool supports usability and interpretation of the dataset. The Responsible AI Measures Dataset enables practitioners to explore existing assessment approaches and critically analyze how the computing domain measures normative concepts.
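A dataset of this shape (evaluation measures tagged with the ethical principle they operationalize, plus publication metadata) lends itself to simple tabular queries. The sketch below is purely illustrative: the rows and column names are invented for the example and are not the released dataset's actual schema.

```python
import pandas as pd

# Toy rows mimicking the described structure: each row is one evaluation
# measure, tagged with the ethical principle it operationalizes and the
# year of the publication it was extracted from.
# Column names and values are hypothetical; consult the released dataset
# and its documentation for the real schema.
measures = pd.DataFrame([
    {"measure": "demographic parity gap",   "principle": "fairness",     "year": 2018},
    {"measure": "membership inference AUC", "principle": "privacy",      "year": 2021},
    {"measure": "explanation fidelity",     "principle": "transparency", "year": 2020},
    {"measure": "equalized odds difference", "principle": "fairness",    "year": 2019},
])

# Example query: how many measures operationalize each principle?
counts = measures.groupby("principle")["measure"].count().to_dict()
print(counts)  # {'fairness': 2, 'privacy': 1, 'transparency': 1}
```

Queries like this (counts per principle, coverage over time) are the kind of exploration the paper's accompanying visualization tool is described as supporting interactively.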
Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
Opening the Scope of Openness in AI
Tamara Paris
Jin L.C. Guo
Roboethics for Everyone – A Hands-On Teaching Module for K-12 and Beyond
In this work, we address the evolving landscape of roboethics, expanding beyond physical safety to encompass broader societal implications. Recognizing the siloed nature of existing initiatives to teach and inform ethical implications of artificial intelligence (AI) and robotic systems, we present a roboethics teaching module designed for K-12 students and general audiences. The module focuses on the high-level analysis of the interplay between robot behaviour design choices and ethics, using everyday social dilemmas. We delivered the module in a workshop to high school students in Montreal, Canada. From this experience, we observed that the module successfully fostered critical thinking and ethical considerations in students, without requiring advanced technical knowledge. This teaching module holds promise to reach a wider range of populations. We urge the education community to explore similar approaches and engage in interdisciplinary training opportunities regarding the ethical implications of AI and robotics.
From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems
How different mental models of AI-based writing assistants impact writers’ interactions with them
A.R. Olteanu
Q. Vera Liao
Investigating Robot Influence on Human Behaviour By Leveraging Entrainment Effects
Perspectives on Robotic Systems for the Visually Impaired.
Many roboticists hope to build robots and develop technologies that would one day help vulnerable populations to improve their quality of life. As there are over 2.2 billion people with visual impairments in the world, this vulnerable population is a prime target for robotic assistants to help. In a discussion with a Certified Orientation and Mobility Specialist, someone who helps individuals with visual impairments navigate and perform daily tasks effectively, some interesting and counterintuitive questions were raised about technological developments, particularly robots. While these devices were meant to help the blind and visually impaired (BVI) population, many are, in reality, not practically beneficial. In this article, we highlight certain misconceptions about the BVI population and their needs. We emphasize the mismatch between robotics research and the needs of individuals with visual impairments, especially from the lens of HRI researchers.
Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review.
Mark J. Yaffe
Genevieve Gore
S. A. Rahimi