In early June 2023, 39 participants from 20 countries gathered at Mila – Quebec AI Institute in Montreal to attend the very first Summer School in Responsible Artificial Intelligence (AI) and Human Rights.
During the week-long event, hosted in collaboration with the University of Montreal, they met with interdisciplinary experts and explored how to responsibly design and deploy AI systems through themes such as responsibility, transparency, ethics, law and governance.
They also attended skills development workshops and designed responsible AI projects based on challenging real-world scenarios.
Most of all, they collaborated and bonded with peers from all over the world and from a wide range of backgrounds, including master's and doctoral students, early-career researchers, and professionals from the public, private and non-profit sectors.
Catherine Régis, Mila researcher, Full Professor of Law at the University of Montreal and Scientific Director of the Summer School, conceived it as a way to better integrate human rights concepts and principles into the conversation around responsible AI at the national and international levels.
“It was so exciting to see people from all around the world and from different backgrounds like computer science, law and ethics coming together around a common agenda of responsible AI and human rights,” she said.
“Participants were really happy and it’s a good sign that we need to continue to build a community with this very large network of people who will have an impact on their field and their work afterwards. I’m really looking forward to the next edition of the Summer School.”
Benjamin Prud’homme, Executive Director of the AI for Humanity team at Mila, said that the Summer School was a good opportunity to bridge interdisciplinary gaps and contribute to the global discussion around responsible AI.
“We are at an exceptional moment in the conversation about the development and deployment of AI, where almost every country in the world – at the local, national and international levels – is questioning how to approach the opportunities and risks of AI,” he said.
“We have a strong belief in putting human rights at the center of this conversation, and the Summer School is a way for Mila to ensure that we continue to think about these issues collectively to inform public policy. It was designed to enrich the conversation and to be international and inclusive, so that each of the participants would return to their organizations and be adequately equipped to make responsible AI an integral part of their practices.”
Here are some of the themes explored during the Summer School.
Fair, transparent and accountable AI
The first two days eased participants into the fundamentals of responsible AI and the governance frameworks that researchers, political leaders and civil society will have to navigate in a time of unprecedented technological change.
Virginia Dignum, Professor of Responsible AI at Umeå University, Sweden, kicked off the Summer School by speaking about the need to make AI development more fair, transparent and accountable.
AI systems are artifacts made by people, and users should question who made the designs, what they are meant for, which choices were made and who they represent. No matter the situation, human responsibility should remain at the core of AI solutions.
Developing responsible AI systems thus requires a transparent approach, and ethical principles should be taken into account throughout the entire development and deployment pipeline.
Translating principles into action
To bridge the gap between theory and practice, Marc-Antoine Dilhac, Mila researcher, Professor of Ethics and Political Philosophy at the University of Montreal, and one of the architects of the Montreal Declaration for a Responsible Development of Artificial Intelligence, shared an overview of the Declaration's principles and how to implement them.
He showed how principles from the Declaration, like moral acceptability, autonomy, solidarity and intimacy, could be used to assess whether an AI system is compatible with human flourishing, which should be at the core of a human rights-based approach to AI.
The Montreal Declaration should be a work in progress enriched by collective knowledge, and in that spirit, the Summer School allowed him to share perspectives with a diverse group of participants.
“We need to reach out to citizens more in order for them to better take advantage of the tool,” he said.
Professor Catherine Régis then discussed the importance of bringing the human rights framework to the center of AI discussions at the local and international levels.
She mentioned that such a framework, which has been agreed upon by states around the world, provides a rare common ground in AI governance.
But the intersection between human rights and AI needs to be better articulated.
To translate human rights into concrete actions in this area, she mentioned that Human Rights Impact Assessments (HRIA) can play an important role in helping governments and businesses mitigate, upstream, the risks AI systems can pose to such rights before deploying the technology.
Towards better AI governance
The next day, Nicolas Miailhe, Founder and President of The Future Society (an independent ‘think-and-do tank’ with a mission to help operationalize the governance of AI through institutional innovation), explored the opportunities and challenges of AI governance.
With the shift in public discourse following the rise of generative tools like ChatGPT, efforts should now focus on developing legal backstops to incentivize corporations to build more responsible AI systems. The current arms race between competing corporate giants does not provide the right incentives to develop robust models, and instead leads to bigger and increasingly unintelligible ones.
Hence the need for better global AI governance, which could emerge from a global dialogue between countries. But transnational cooperation is made difficult by differing national interests, and because AI is a moving target amid slow institutional change.
“Institutional innovation and capacity building cannot go one without the other… These are the two legs of AI governance,” Nicolas Miailhe concluded.
Lofred Madzou, director of strategy at TruEra (a platform to explain, test, debug and monitor machine learning models) joined the stage to discuss how to better evaluate and audit AI systems.
He emphasized the need to build public awareness of AI in order to better govern it, and sees the concentration of knowledge in too few hands as the biggest governance threat today.
Fostering ethics in AI
On the third day, AJung Moon, Mila researcher and Assistant Professor in the Department of Electrical and Computer Engineering at McGill University who specializes in robotics, focused on why integrating ethical perspectives is essential, as the increasing use of AI comes with a slew of moral dilemmas that cannot easily be resolved without adequate tools to tackle them.
The ethics of machines making automated or autonomous decisions differs from traditional ethical approaches to physical tools or technologies, like a plane, because of the far-reaching consequences AI systems can have on people's lives.
Examples from the use of autonomous vehicles and lethal autonomous weapon systems illustrate that ethical questions and moral dilemmas arising from the design and use of AI systems never have a simple and absolute answer.
Beyond the main Western theories of ethics, like consequentialism, utilitarianism, deontology and virtue ethics, real-life dilemmas are always more complicated, and dealing with them requires taking a step back and assessing the broader context.
AJung Moon said she was hopeful because of the increasing interest of the AI community in ethics and that the Summer School was a great opportunity to meet a diverse set of participants.
“The fact that people are willing to spend their full week here, talking about this particular topic shows that people are voting with their time, which is very valuable.”
Increasing safety of AI systems
Shalaleh Rismani, Mila researcher and PhD candidate at McGill University, then explored toolkits and frameworks to audit algorithmic systems and how to integrate ethical reasoning throughout the AI lifecycle.
When it comes to AI, traditional definitions of safety-critical harms, such as loss of life, significant property damage or damage to the environment, are often not enough, as some real-life harms, like the manipulation of users, can slip under the radar.
Potential impacts of AI systems can be assessed early on in a more granular way through interdisciplinary dialogue by taking into account considerations like economic loss, alienation, stereotyping, information harms, and privacy violations.
Safety engineering principles and methods from older industries, like aviation or construction, could serve as a guide on the path to designing safer and more trustworthy AI systems.
The concept of responsible and ethical AI may remain vague today, much like concepts of building safety a century ago, but over time, actors in the field should build a culture that values safety and responsible development.
International regulation and cooperation
The next day, Nathalie Smuha, legal scholar and philosopher at the KU Leuven Faculty of Law, took the stage to lay out the legal challenges of regulating AI, using the European Union's example.
There is no single definition of AI because it is an umbrella term covering many different sectors and use cases, but lawmakers at the local, national and international levels need to define the scope of what will or won't be regulated, and how to do it.
They will inevitably face tough choices, as even declining to choose is itself a choice.
There is currently a proliferation of national AI strategies from China, the European Union and the United States that are all competing to impose their definition of what "good" AI is.
But no single approach is perfect; this is why interdisciplinary and international discussions are crucial to inform regulatory decisions.
As regulation efforts move forward, democratic dialogue will be key to make sure all voices are heard, she concluded.
Nathalie Smuha was then joined by Mark Schaan, Senior Assistant Deputy Minister for Strategy & Innovation Policy at the Canadian Department of Innovation, Science and Economic Development (ISED), to discuss and compare regulatory approaches on both sides of the Atlantic.
They shared views on how to avoid a race to the bottom in terms of regulation at a time of heightened competition, and the need to coordinate and establish standards at a global scale.
They also discussed the need to strike a balance between ensuring that AI power is not concentrated in too few hands and preventing technology actors from fleeing to more lenient jurisdictions.
Schaan emphasized the importance of bringing more people around the table to enrich our collective knowledge of AI and build appropriate tools and frameworks to regulate it.
He said that gathering people from all backgrounds and countries at an event like the Summer School is part of that collaborative process.
“It really is that connective piece: we need to be making linkages and understanding these issues broadly, we need to be really engaging with a much wider swath, so that’s why events like these are really important,” he said.
Broadening perspectives on AI and human rights
The Summer School was dedicated to widening horizons, fostering dialogue and sharing diverse perspectives from all parts of the world, and it ended with a day full of interactive panels and workshops on how to make AI more fair, equitable and inclusive.
Participants first attended a panel discussion between Catherine Régis, Bernard Duhaime, Professor of International Law at the Université du Québec à Montréal (UQAM), and Karine Gentelet, Professor in the Social Science Department at the Université du Québec en Outaouais, who shared perspectives on how to better integrate public experience and expertise into the discussion around AI.
Then, Gabriela Ramos, Assistant Director-General for the Social and Human Sciences of UNESCO, joined the stage and emphasized the deep relationship between UNESCO and Mila, which jointly published the book Missing links in AI governance.
She presented elements from UNESCO’s Recommendation on the Ethics of Artificial Intelligence that could act as a guide as we move towards a more inclusive and human-centric approach to AI.
No perfect institutional model exists because the governance of AI is fragmented across countries and sectors, which justifies an overarching institutional framework to ensure human rights are protected.
Lessons from the regulation of the financial sector in the aftermath of the 2008 financial crisis could serve as a guide to avoid making similar mistakes when it comes to regulating AI.
The current laissez-faire approach to AI regulation should be rebalanced by public policy choices that ensure the protection of all humans from potential harms while ensuring that everyone can reap benefits from the use of AI.
Inclusion of more citizens –especially women– in consultations on AI is therefore crucial to gather more diverse perspectives on AI governance.
Participants then attended a panel discussion on the future of AI at the crossroads of academia and industry, with Maria Axente, Responsible AI and AI for Good Lead at PwC United Kingdom; Shingai Manjengwa, Director of Technical Education at the Vector Institute and CEO of Fireside Analytics Inc.; and Blake Richards, Mila researcher and Associate Professor in the School of Computer Science and Montreal Neurological Institute at McGill University.
Participants ended the day by presenting responsible AI projects they developed throughout the week in front of a jury that provided them with feedback and advice.
It was then time for goodbyes, but participants promised to keep in touch and to organize regular meet-ups in the future.