TRAIL Research

The Responsible AI Learning Journey for Machine Learning Researchers


Description

As the architects of tomorrow's technological landscape, ML researchers bear a critical responsibility in shaping our shared future. It is essential to equip researchers not only with the technical knowledge necessary to advance the frontiers of artificial intelligence, but also with an acute awareness of the broader societal implications of this technology and the ability to assess the downstream impacts of the AI systems they develop.

Designed at Mila for our research community, the Trustworthy and Responsible AI Learning (TRAIL) certificate is a 12-hour synchronous training program for ML researchers. It provides participants with the foundational knowledge, skills, and tools necessary to responsibly design and conduct their AI research projects. Attendees learn to apply best practices and methodologies throughout the research cycle, from identifying ethical concerns to assessing impacts and investigating and mitigating unintended consequences.

Who Is It For?

This program is currently offered to Mila-affiliated students and researchers only.

However, if you are an educator interested in offering TRAIL to your students, our comprehensive Open Source Guide for Educators will soon be available. This free resource provides the content and facilitation notes required to offer the program to your students.

Open Source Guide for Educators: Interest Form 

To receive the Guide as soon as it becomes available, please fill in this form.

For any additional information, you may contact ariana.seferiades@mila.quebec.

Learning Objectives

Recognizing the challenges researchers face in operationalizing responsible AI, TRAIL was created to equip Mila's diverse pool of researchers, working on everything from fundamental to applied ML, with the essential tools and questions to practically implement RAI in their research projects.

Leveraging the knowledge of Mila-affiliated professors making significant contributions in the fields of FATE (Fairness, Accountability, Transparency, Ethics), human-AI interaction, and responsible AI, the program was developed with the support and guidance of expert educators and researchers.

After this program, participants will be able to:

  • Define what responsible AI and AI ethics are, and how they apply to the context of research;
  • Develop ethical sensitivity and critical thinking skills to conduct responsible AI research;
  • Assess the downstream impacts of AI research projects and make conscious design choices, using practical tools, frameworks, and case studies;
  • Adopt the AI research cycle as a strategic framework to consciously plan and execute responsible AI research projects;
  • Make informed design decisions from the early stages of a project;
  • Write impact statements;
  • Discover and apply socio-technical tools and best practices to mitigate the potential risks of AI systems.

This program exposes researchers to important concepts, ideas, methods and practices for pursuing AI research that is sensitive to downstream impacts and use cases. It helps to bring discussions around values and goals to machine learning.

Professor Jackie Cheung, Core Academic Member, Mila

93% Satisfaction

93% of participants would recommend the program to their peers (100% in the last two cohorts).

120 Certifications

120 certified ML researchers since 2023.

74% Impact

74% of participants reported that the program significantly influenced their approach to ML research.

Content Overview

Module 1 | Introduction to Responsible AI and AI Ethics

Understand the scope and components of responsible AI and how it applies to AI research. Recognize and address moral dilemmas. Develop ethical sensitivity as well as reflective and dialogic skills.

Module 2 | Responsible AI Project Planning and Mitigation

Bridge the gap between theory and practice by learning practical tools and frameworks for identifying and mitigating risks throughout the AI lifecycle. 

Module 3 | Integrating Learning and Reflecting on the Road Ahead 

Practice impact assessments and responsible design. Build accountability and self-reflection.

Past Instructors

  • Fernando Diaz, Affiliate Member; Associate Professor, Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  • Golnoosh Farnadi, Core Academic Member; Assistant Professor, McGill University, School of Computer Science; Canada CIFAR AI Chair
  • Rose Landry, Project Manager, AI Governance, Legal Affairs and AI Governance
  • Maryam Molamohammadi, Advisor, Responsible AI, Applied Projects
  • AJung Moon, Core Academic Member; Assistant Professor, McGill University, Department of Electrical and Computer Engineering
  • Shalaleh Rismani, PhD, McGill University
  • David Rolnick, Core Academic Member; Assistant Professor, McGill University, School of Computer Science; Canada CIFAR AI Chair

TRAIL covered a lot of topics surrounding responsible AI while being super fun and engaging! I especially loved the group workshop activities where we got to not only put into practice the concepts/tools we learned about, but also to socialize with people [...] Overall, it was an amazing experience that I recommend to all AI researchers [...].

Yu Lu Liu, MSc. Student, Mila/McGill University

Contact

If you have questions about the program, please contact Ariana Seferiades Prece, Project Manager, Learning at Mila.
