AI offers great opportunities, but it also carries risks that must be identified and addressed at every stage of research, development and deployment.
We believe that scientific progress should benefit everyone. To this end, we created the AI for Humanity team to ensure that technology serves the best interests of all humans. Our growing team of specialists in AI ethics and governance engages with multidisciplinary academics, civil society, industry partners and governments to ensure a socially beneficial and responsible development of AI.
In 2018, Mila pioneered the responsible AI movement by creating the Montreal Declaration for a Responsible Development of Artificial Intelligence in partnership with the University of Montreal. It was supported by most of Quebec’s AI ecosystem and proposed ethical principles based on 10 fundamental values: well-being, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, prudence, responsibility and sustainable development. These values continue to be at the heart of everything we do, and guide us on the path to a better development of AI.
TRAIL (Trustworthy and Responsible AI Learning) program
Ethical, legal and governance considerations are mostly absent from the formal education of AI researchers and industry professionals. Our TRAIL programs (TRAIL Research and TRAIL Industry) teach participants how to best evaluate and address the downstream impact of AI systems by introducing current practices and tools for their responsible development. Both programs aim to build knowledge and strengthen skills among researchers, industry actors and policymakers.
Summer School in Responsible AI & Human Rights
AI education and research initiatives often lack an interdisciplinary and international approach. Our Summer School, a joint initiative by Mila and the University of Montreal, brings together participants from diverse backgrounds and countries to learn about the socially beneficial and responsible development of AI, human rights and AI governance. Over the course of a week, participants gain actionable knowledge, a professional network and leadership skills to become advocates for responsible AI and human rights in their own communities and beyond.
“Keeping Up With AI” speaker series for the Canada School of Public Service
Public servants and policymakers need a better understanding of AI technology to regulate it effectively and to use it in the service of citizens. Mila professors and staff members introduce the most interesting and promising applications of AI to a public service audience while highlighting the associated risks. We believe this will foster a better understanding among public servants of both the benefits and the pitfalls of a rapidly evolving technology.
AI4Good Lab
Women+ (women, transgender, and non-binary individuals) are underrepresented at all levels of the AI industry, particularly in leadership roles. The AI4Good Lab is an introductory machine learning program for women+ with some programming background and anywhere from no experience to intermediate experience in machine learning. Our Lab trainees have the potential to go on to become researchers, developers, leaders and policymakers, and the program introduces them to the technical skills, pressing topics and, most importantly, the contacts they need to succeed in the AI industry.
Responsible AI consulting for industry
Smaller companies need better access to AI tools to keep up with their bigger competitors. Our interdisciplinary team supports small and medium-sized enterprises (SMEs) in the responsible adoption of AI, providing hands-on, practical advice to machine learning teams within industry. We work in close collaboration with Mila's Activation program (which helps SMEs kick-start the adoption of advanced machine learning) and its Applied Machine Learning Research team.
DEI and responsible AI playbook
Knowledge of Diversity, Equity and Inclusion (DEI) issues is crucial to the safe and responsible development of AI systems. We want to provide an organized, accessible and field-specific channel for AI researchers to learn about DEI and responsible AI concepts, and to support them in implementing the practices most relevant to their work. The primary outcome will be an interactive webpage with key information and further resources, complemented by personalized consultations.
Research and thought leadership (UNESCO, UN Habitat)
International cooperation is needed to develop effective safeguards and governance for AI. Our team is leading international and interdisciplinary research projects on the responsible application of AI systems, risk mitigation for governments and AI governance. We established partnerships with multilateral institutions such as the United Nations Educational, Scientific and Cultural Organization (UNESCO), with whom we published a book on the urgent need to regulate AI, and the United Nations Human Settlements Programme (UN-Habitat), with whom we published a book on the risks, applications and governance of AI in cities.
Interdisciplinary scoping workshop and research on psychological impacts of AI systems
Psychological harm, especially online, can have dire impacts on mental health. We are exploring the concept of psychological harm, its impacts, and its many potential definitions and applications in the context of recent legislative initiatives. We assembled an interdisciplinary group of experts (including Mila AI researchers and professors) to share their insights on the issue, take stock of existing research across a variety of fields, identify research gaps and outline further research needs.
First Languages AI Reality (FLAIR) Initiative
Nearly 50% of the world’s Indigenous languages, and about 90% of those in North America, are endangered, according to UNESCO. We created the First Languages AI Reality (FLAIR) initiative to enable the next chapter in Indigenous language reclamation through advanced immersive AI technology. FLAIR’s goal is to develop a method for the rapid creation of custom automatic speech recognition (ASR) models for Indigenous languages.
AI for Humanity’s Applied Projects portfolio contains five high-impact, socially beneficial AI projects. We seek to generate collective impact by sharing our best practices and insights as well as open sourcing our datasets and models. We collaborate with multidisciplinary experts to carefully test and assess the potential impacts of our tools before deploying them.
Few tools exist to detect sexist bias online and prevent its harmful impacts. Biasly is a multidisciplinary research project designed to find, correct and remove misogynistic language in text and to teach people how bias against women can be expressed. In addition to deploying the tool, we seek to publish a novel dataset that we hope will advance the field of study.
Modern Slavery Acts require companies worldwide to publish statements detailing the measures they have put in place to ensure there is no slavery in their supply chains, but these reports are often long and difficult to analyze. Our project AI against Modern Slavery (AIMS) is creating methods and tools to automatically ‘read’ these statements. Using the latest data science and machine learning techniques, including natural language processing (NLP) and computational linguistics, AIMS seeks to provide a deep dive into each report, assessing its level of compliance and enhancing the impact of anti-slavery regulation.
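To illustrate the kind of analysis involved (this is a toy sketch, not AIMS's actual pipeline; the criteria and keywords below are invented for the example), a compliance check over a statement might start from simple keyword heuristics before graduating to trained NLP models:

```python
# Toy sketch: flag which compliance criteria a modern-slavery statement
# appears to address, using keyword heuristics. Illustration only; the
# criteria and keywords here are hypothetical, and real tooling would
# rely on trained language models rather than string matching.

CRITERIA_KEYWORDS = {
    "risk assessment": ["risk assessment", "risk analysis", "due diligence"],
    "supplier audits": ["audit", "supplier review", "site visit"],
    "training": ["training", "awareness program", "workshop"],
    "remediation": ["remediation", "corrective action", "grievance"],
}

def assess_statement(text: str) -> dict:
    """Return, per criterion, whether any of its keywords appear in the text."""
    lowered = text.lower()
    return {
        criterion: any(keyword in lowered for keyword in keywords)
        for criterion, keywords in CRITERIA_KEYWORDS.items()
    }

statement = (
    "We conduct annual supplier audits and risk assessments, "
    "and provide modern slavery awareness training to all staff."
)
report = assess_statement(statement)
```

A keyword baseline like this is brittle (it rewards boilerplate wording rather than substance), which is precisely why a deeper NLP-based reading of each statement is needed.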
Lawyers, judges and policymakers worldwide need better infrastructure to identify and understand the trends in AI-related policy and law. Our Databank project is being developed to bridge this gap by providing an interactive search engine that can surface key trends in AI regulation at a global scale. Leveraging a repository of data from the Organisation for Economic Co-operation and Development (OECD), Databank will provide users with the information they need to identify blind spots, trends and even opportunities for international cooperation.
Online human trafficking is one of the most intractable challenges of our time. Our Project Infrared, developed in consultation with lawyers, ethicists, criminology experts and survivors of human trafficking, uses data-driven techniques to flag suspicious network activity that exhibits signs of human trafficking. We believe that this could lead to more lives being helped or saved.
Better policies and grassroots change are needed to transition from resource-intensive, extractive farming models to more sustainable models that respect nature and local knowledge and help economies thrive. Data-driven Insights for Sustainable Agriculture (DISA) is an interdisciplinary project that promotes an alternative approach to farming called regenerative agriculture, which supports carbon sequestration and reduces soil erosion and water pollution. By combining high-resolution satellite imagery with machine learning algorithms, DISA can identify regenerative and non-regenerative agricultural practices at scale, demonstrating the positive impact of this approach on soil fertility, soil erosion and the resilience of farms to extreme weather events.