
11 Mar 2024

14 takeaways from Mila’s first international conference on human rights and AI

From February 14 to February 16, 2024, Mila welcomed a wide array of thought leaders and decision-makers from around the world for its first global conference on artificial intelligence (AI) and human rights, Protecting Human Rights in the Age of AI. Together, experts from international organizations, academia and NGOs discussed solutions to better integrate human rights into AI governance and ensure their protection throughout the world.

Through 14 quotes, here are the main takeaways from the conference:

1. AI innovation needs to be more diverse

“We already see some applications of AI, from sustainable development to humanitarian response to improving efficiency in terms of allocation and use of resources. But existing business models are not going to deliver on those opportunities, and the existing concentration of tech and economic power is not going to help with that. Six to seven companies are not going to deliver us from our problems. We need more diverse participation in the AI innovation wave.” – Amandeep Singh Gill, Secretary-General’s Envoy on Technology, United Nations

2. Liability should be built into AI systems

“Companies should be demonstrating to the public or its representatives that their systems are not going to harm people before they even deploy them, or before they even build them.” – Yoshua Bengio, Founder and Scientific Director, Mila

Yoshua Bengio, Amandeep Singh Gill and Alondra Nelson during the opening panel.

3. AI governance should go beyond existing frameworks

“The ‘AI for good’ umbrella lacks specificity… Human rights law, the Universal Declaration of Human Rights are deeply relevant for much of our work, but it’s going to take work for us to operationalize these principles in the global governance of AI.” – Alondra Nelson, Harold F. Linder Professor, Institute for Advanced Study

4. AI companies are part of the solution

“While states remain the main duty bearer of human rights protection, there is a growing recognition that businesses should also respect them. This is essential, as with the fading centrality of states and the integration of new actors in international order, the private sector becomes a crucial and increasingly powerful player that needs to be part of the solution.” – Catherine Régis, Full Professor, Université de Montréal, Canada CIFAR AI Chair, Associate Academic Member, Mila

5. Legislating on AI is the way, not the end goal

“As lawyers, we should not expect that everything can be solved with a beautiful piece of legislation: the work does not end with just a list to check. It goes back to finding a middle ground between developing tools and methodologies in order to operationalize them, without being tempted to then think that the work is over.” – Nathalie Smuha, Assistant Professor of Law, KU Leuven

6. AI regulation is having a global moment

“There is a converging view that governing AI better is part of the agenda. It’s not about regulating AI or not, it’s about what tools we have to ensure that the development of these technologies delivers better impacts and that we control the downside risk in terms of bias, or discrimination, or abuses.” – Gabriela Ramos, Assistant Director-General for the Social and Human Sciences, UNESCO

7. Enforcement of human rights breaches by AI is key

“Having human rights is critical but enforcing them is also critical. In some cases it’s not easy to obtain redress when your rights have been infringed upon unless you’re very rich and can hire law firms to represent you.” – Karine Perset, Head of OECD.AI Policy Observatory, AiGO and its Network of Experts

8. Standards on human rights can guide AI developers

“We need to have standards which help organizations take into account human rights assessments and considerations, because computer scientists are not trained to do so. Standards are methodologies that help developers to engage with stakeholders and take these issues into account.” – Clara Neppel, Senior Director – European Operations, IEEE

9. More diverse voices need to be heard on AI

“One of the problems today in AI governance is that mostly, civil society and marginalized groups are not represented at the table. To be represented, they need resources, they need support, training, and they need to be invited.” – Wanda Muñoz, Senior Consultant, Inclusion, Gender Equality and Humanitarian Disarmament, Member of the Feminist AI Research Network

10. AI governance benefits from cross-cultural perspectives

“As we’re determining regulation to govern AI, we need to think from a pro-poor, subaltern perspective. Who has been the most affected by these technologies and how do we protect them? Until we remove ourselves from a pro-Western, individualistic perspective, we will never get it right.” – Jake Okechukwu Effoduh, Assistant Professor of Law, Toronto Metropolitan University

11. Awareness is crucial to protect human rights

“The vast majority of institutions have no idea where in their country AI is being used, and which human rights are most affected. Depending on the region, it’s also not the most pressing issue to focus on AI and on discrimination. Monitoring human rights infringements is a pressing need, and we should focus more on monitoring and awareness raising amongst the entire population.” – Nele Roekens, Legal advisor at Unia, the Belgian National Human Rights Institution and Equality Body

12. Multidisciplinarity is key to properly governing AI

“We cannot separate AI technology from the environment in which it is created, used, and governed. There are no technology fixes for all the technology consequences… We need to look at the issues from a much broader, a much more multidisciplinary perspective.” – Virginia Dignum, Full Professor of Responsible Artificial Intelligence, University of Umeå, Sweden

From left to right: Karine Perset, Patrick Penninckx, Neema Lugangira and Benjamin Prud’homme

13. Better global AI governance would ensure no country gets left behind

“If you put legislators out of the equation, they are going to create laws which may or may not be implementable, and people may turn around and start pointing fingers at them. There is a knowledge gap in AI between global North and global South, and we have to make sure that we capacitate legislators.” – Neema Lugangira, Member of Parliament (CCM) representing NGOs, Tanzania Mainland

14. We are at a pivotal time in AI governance

“What is missing right now is the consensus, not about the human rights, democracy or rule of law principles that need to be defended, but about institutional differences. And we need to be able to overcome them to really focus on those core dimensions that we all agree upon in order to be able to implement them.” – Patrick Penninckx, Head of Department – Information Society, Council of Europe