
AJung Moon

Core Academic Member
Assistant Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Fairness
AI Ethics
Robot Ethics
Human-Centered AI
Human-AI Interaction
Human-Computer Interaction (HCI)
Human-Robot Interaction
Robotics
AI Safety

Biography

AJung Moon is an experimental roboticist. She studies how robots and artificial intelligence systems influence the way people move, behave, and make decisions, in order to determine how we can design and deploy such autonomous intelligent systems more responsibly.

At McGill University, she is the director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. The RAISE lab is an interdisciplinary group that investigates the social and ethical implications of robots and artificial intelligence systems, and explores what it means for engineers to design and deploy these systems responsibly for a better technological future.


Publications

From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems
Shalaleh Rismani
Roel Dobbe
How different mental models of AI-based writing assistants impact writers’ interactions with them
Shalaleh Rismani
Su Lin Blodgett
Q. Vera Liao
Investigating Robot Influence on Human Behaviour By Leveraging Entrainment Effects
Lixiao Zhu
Perspectives on Robotic Systems for the Visually Impaired
Christopher Yee Wong
Rahatul Amin Ananto
Tanaka Akiyama
Joseph Paul Nemargut
Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review.
Vania Karami
Mark J. Yaffe
Genevieve Gore
No such thing as one-size-fits-all in AI ethics frameworks: a comparative case study
Vivian Qiang
Jimin Rhim
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
In order for autonomous mobile robots to navigate in human spaces, they must abide by our social norms. Reinforcement learning (RL) has emerged as an effective method to train robot sequential decision-making policies that are able to respect these norms. However, a large portion of existing work in the field conducts both RL training and testing in simplistic environments. This limits the generalization potential of these models to unseen environments, and undermines the meaningfulness of their reported results. We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning. By employing multiple environment types and by modeling pedestrians using multiple dynamics models, we are able to progressively diversify and escalate difficulty in training. Our results show that curriculum learning can be used to achieve better generalization performance than previous training methods. We also show that many existing state-of-the-art RL social navigation works do not evaluate their methods outside of their training environments, and thus their reported results do not reflect their policies' failure to adequately generalize to out-of-distribution scenarios. In response, we validate our training approach on larger and more crowded testing environments than those used in training, allowing for more meaningful measurements of model performance.
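The curriculum idea described in this abstract can be pictured as a staged training loop that only advances to a harder environment once the policy performs well enough on the current one. The sketch below is a hypothetical illustration, not the paper's implementation: the stage definitions, the success threshold, and the `evaluate` callback are all assumptions made for the example.

```python
# Minimal sketch of curriculum learning for RL social navigation
# (hypothetical illustration): training stages are ordered by
# difficulty (crowd density, pedestrian dynamics model), and the
# trainer advances once the policy's success rate on the current
# stage reaches a threshold.
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    n_pedestrians: int     # crowd density controls difficulty
    pedestrian_model: str  # e.g., "linear" vs. "social-force" dynamics


# Difficulty escalates across stages (assumed values).
CURRICULUM = [
    Stage("empty-room", 0, "linear"),
    Stage("sparse-crowd", 4, "linear"),
    Stage("dense-crowd", 12, "social-force"),
]


def train_with_curriculum(evaluate, threshold=0.8, episodes_per_eval=50):
    """Advance through CURRICULUM stages in order.

    `evaluate(stage, n)` is a stand-in for running `n` training
    episodes on `stage` and returning the policy's success rate.
    Returns a list of (stage name, final success rate) pairs.
    """
    history = []
    for stage in CURRICULUM:
        success = 0.0
        while success < threshold:
            success = evaluate(stage, episodes_per_eval)
        history.append((stage.name, success))
    return history
```

The key design choice this mirrors is that evaluation on the final, crowded stage happens only after the policy has cleared the simpler ones, rather than training and testing in a single simplistic environment.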
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Renelito Delos Santos
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. Results of our analysis demonstrate that the safety frameworks, neither of which is explicitly designed to examine social and ethical risks, can uncover failures and hazards that pose such risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model itself when examining social and ethical risks, especially when we have minimal information about an ML model.
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
What does it mean to be a responsible AI practitioner: An ontology of roles and skills
Shalaleh Rismani
With the growing need to regulate AI systems across a wide variety of application domains, a new set of occupations has emerged in the industry. The so-called responsible Artificial Intelligence (AI) practitioners or AI ethicists are generally tasked with interpreting and operationalizing best practices for the ethical and safe design of AI systems. Due to the nascent nature of these roles, however, it is unclear to future employers and aspiring AI ethicists what specific functions these roles serve and what skills are necessary to fulfill them. Without clarity on these, we cannot train future AI ethicists with meaningful learning objectives. In this work, we examine what responsible AI practitioners do in the industry and what skills they employ on the job. We propose an ontology of existing roles alongside the skills and competencies that serve each role. We created this ontology by examining job postings for such roles over a two-year period (2020-2022) and conducting expert interviews with fourteen individuals who currently hold such a role in the industry. Our ontology can help business leaders looking to build responsible AI teams, and provides educators with a set of competencies that an AI ethics curriculum can prioritize.
The potential for co-operatives to mitigate AI ethics catastrophes: perspectives from media analysis
David Marino
Would the world have seen fewer AI-related scandals if more AI companies operated as co-operatives? In response to multiple high-profile tech scandals within the last decade, there have been increased calls for introducing more accountability in the AI industry. However, it is unclear to what degree the proposed efforts have been or will be effective in practice. The question remains whether these incremental, multi-stakeholder AI ethics efforts are in fact trying to address a fundamentally systemic issue inherent to the existing corporate power structure. As an attempt to address this question, we identify the major themes in high-profile AI-related catastrophes of the last four years (2018–2021) through an inductive media analysis. We then investigate how the principles of democratic governance and distributed executive power, core to the co-operative organizational structure, could have prevented or mitigated the contributing factors of the reported events. We find that the vast majority (71%) of recent AI ethics scandals are not the result of a lack of knowledge or tools, but are attributable to power dynamics that hinder internal stakeholders' ability to take action. We present the co-operative governance structure as a possible mitigating solution for future AI ethics catastrophes, and provide a critical look at the practical challenges inherent to AI co-operatives.
From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Edgar Jatho
Joshua A. Kroll