Portrait of AJung Moon

AJung Moon

Core Academic Member
Associate Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Fairness
AI Ethics
Robot Ethics
Human-Centered AI
Human-AI Interaction
Human-Computer Interaction (HCI)
Human-Robot Interaction
Robotics
AI Safety

Biography

AJung Moon is an experimental roboticist. She studies how robots and artificial intelligence systems influence the way people move, behave, and make decisions, in order to determine how we can design and deploy such autonomous intelligent systems more responsibly.

At McGill University, she is Director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. The RAISE lab is an interdisciplinary group that investigates the social and ethical implications of robots and artificial intelligence systems, and explores what it means for engineers to design and deploy these systems responsibly for a better technological future.

Current Students

PhD - McGill
PhD - McGill
Research Master's - McGill
PhD - McGill
PhD - McGill
Principal supervisor:
PhD - McGill

Publications

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Kathryn Henne
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
What does it mean to be a responsible AI practitioner: An ontology of roles and skills
With the growing need to regulate AI systems across a wide variety of application domains, a new set of occupations has emerged in the industry. The so-called responsible Artificial Intelligence (AI) practitioners or AI ethicists are generally tasked with interpreting and operationalizing best practices for the ethical and safe design of AI systems. Due to the nascent nature of these roles, however, it is unclear to future employers and aspiring AI ethicists what specific functions these roles serve and what skills are necessary to serve those functions. Without clarity on these, we cannot train future AI ethicists with meaningful learning objectives. In this work, we examine what responsible AI practitioners do in the industry and what skills they employ on the job. We propose an ontology of existing roles alongside the skills and competencies that serve each role. We created this ontology by examining job postings for such roles over a two-year period (2020-2022) and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. Our ontology serves business leaders looking to build responsible AI teams and provides educators with a set of competencies that an AI ethics curriculum can prioritize.
The potential for co-operatives to mitigate AI ethics catastrophes: perspectives from media analysis
Would the world have seen fewer AI-related scandals if more AI companies operated as co-operatives? In response to multiple high-profile tech scandals within the last decade, there have been increased calls for introducing more accountability in the AI industry. However, it is unclear to what degree the proposed efforts have been, or will be, effective in practice. The question remains whether these incremental, multi-stakeholder AI ethics efforts are in fact trying to address a fundamentally systemic issue inherent to the existing corporate power structure. As an attempt to address this question, we identify the major themes in high-profile AI-related catastrophes over the last four years (2018–2021) through an inductive media analysis. We then investigate how the principles of democratic governance and distributed executive power, core to the co-operative organizational structure, could have prevented or mitigated the contributing factors of the reported events. We find that the vast majority (71%) of the recent AI ethics scandals are not the result of a lack of knowledge or tools, but are attributable to power dynamics that hinder internal stakeholders from taking action. We present the co-operative governance structure as a possible mitigating solution for future AI ethics catastrophes, and provide a critical look at the practical challenges inherent to AI co-operatives.
From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML
Renee Shelby
Andrew J Smart
Edgar Jatho
Joshua A. Kroll
Roboethics as a Design Challenge: Lessons Learned from the Roboethics to Design and Development Competition.
Cheng Lin
Alexander Werner
Brandon J. DeHart
Vivian Qiang
How do we make concrete progress towards designing robots that can navigate ethically sensitive contexts? Almost two decades after the word 'roboethics' was coined, translating interdisciplinary roboethics discussions into technical design still remains a daunting task. This paper describes our first attempt at addressing these challenges through a roboethics-themed design competition. The design competition setting allowed us to (a) formulate ethical considerations as an engineering design task that anyone with basic programming skills can tackle; and (b) develop a prototype evaluation scheme that incorporates the diverse normative perspectives of multiple stakeholders. The initial implementation of the competition was held online at the RO-MAN 2021 conference. The competition task involved programming a simulated mobile robot (TIAGo) that delivers items for individuals in the home environment, where many of these tasks involve ethically sensitive contexts (e.g., an underage family member asks for an alcoholic drink). This paper outlines our experiences implementing the competition and the lessons we learned. We highlight design competitions as a promising mechanism to enable a new wave of roboethics research equipped with technical design solutions.
Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Kathryn Henne
Paul Nicholas
N'mah Fodiatu Yilla
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
The Role of Robotics in Achieving the United Nations Sustainable Development Goals - The Experts' Meeting at the 2021 IEEE/RSJ IROS Workshop [Industry Activities].
Vincent Mai
Bram Vanderborght
Tamás Haidegger
Alaa M. Khamis
Niraj Bhargava
Dominik B. O. Boesl
Katleen Gabriels
An Jacobs
Robin R. Murphy
Yasushi Nakauchi
Edson Prestes
Ricardo Vinuesa
Carl-Maria Mörch
What does it mean to be an AI Ethicist: An ontology of existing roles
With the increasing adoption of Artificial Intelligence systems (AIS) in various applications and the growing efforts to regulate such systems, a new set of occupations has emerged in the industry. These new roles take different titles and hold varying responsibilities. However, the individuals in these roles are tasked with interpreting and operationalizing best practices for developing ethical and safe AI systems. We broadly refer to this new set of occupations as AI ethicists and recognize that they often hold a specific role at the intersection of technology development, business needs, and societal implications. In this work, we examine what it means to be an AI ethicist in the industry and propose an ontology of existing roles under this broad title, along with their required competencies. We create this ontology by examining job postings for such roles over the past two years and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. The proposed ontology will inform executives and leaders who are looking to build responsible AI teams and provide educators with the necessary information for creating new learning objectives and curricula.
How do AI systems fail socially?: an engineering risk analysis approach
Failure Mode and Effect Analysis (FMEA) has been used as an engineering risk assessment tool since 1949. FMEAs are effective in preemptively identifying and addressing how a device or process might fail in operation, and are often used in the design of high-risk technology applications such as military, automotive, and medical devices. In this work, we explore whether FMEAs can serve as a risk assessment tool for machine learning practitioners, especially when deploying systems for high-risk applications (e.g., algorithms for recidivism assessment). In particular, we discuss how FMEAs can be used to identify social and ethical failures of Artificial Intelligence Systems (AISs), recognizing that FMEAs have the potential to uncover a broader range of failures. We first propose a process for developing Social FMEAs (So-FMEAs) by building on the existing FMEA framework and a recently published definition of Social Failure Modes by Millar. We then demonstrate a simple proof-of-concept So-FMEA for the COMPAS algorithm, a risk assessment tool used by judges to make recidivism-related decisions for convicted individuals. Through this preliminary investigation, we illustrate how a traditional engineering risk management tool can be adapted for analyzing social and ethical failures of AISs. Engineers and designers of AISs can use this new approach to improve their systems' design and perform due diligence with respect to potential ethical and social failures.
Design of Hesitation Gestures for Nonverbal Human-Robot Negotiation of Conflicts
Maneezhay Hashmi
H. F. Machiel Van Der Loos
Elizabeth A. Croft
Aude Billard
When the question of who should get access to a communal resource first is uncertain, people often negotiate via nonverbal communication to resolve the conflict. What should a robot be programmed to do when such conflicts arise in Human-Robot Interaction? The answer to this question varies depending on the context of the situation. Learning from how humans use hesitation gestures to negotiate a solution in such conflict situations, we present a human-inspired design of nonverbal hesitation gestures that can be used for Human-Robot Negotiation. We extracted the characteristic features of the negotiative hesitations humans use, and subsequently designed a trajectory generator (Negotiative Hesitation Generator) that can re-create these features in robot responses to conflicts. Our human-subjects experiment demonstrates the efficacy of the designed robot behaviour against the non-negotiative stopping behaviour of a robot. With positive results from our human-robot interaction experiment, we provide a validated trajectory generator with which one can explore the dynamics of human-robot nonverbal negotiation of resource conflicts.
Ethics of Corporeal, Co-present Robots as Agents of Influence: a Review
H. V. D. Van der Loos
Can Open Source Licenses Help Regulate Lethal Autonomous Weapons?
Cheng Lin
Lethal autonomous weapon systems (LAWS, also known as killer robots) are a real and emerging technology that has the potential to radically transform warfare. Because of the myriad moral, legal, privacy, and security risks the technology introduces, many scholars and advocates have called for a ban on the development, production, and use of fully autonomous weapons [1], [2].