AI for Everyone? A Roadmap to Substantive Equality in AI Ecosystems


In the whirlwind of artificial intelligence’s development and deployment, one fact has made itself clear: AI systems tend to mirror the world we already live in, magnifying existing societal inequalities and deepening the historical marginalisation of certain groups. 

You may have heard the aphorism that an AI system is only as good as the data on which it is trained, and of course that data is drawn from real life, from the flawed but ever-progressing world we inhabit. The same is true for the design, development, and governance of AI, where historic gaps and harms persist. Unless every phase of an AI system’s life cycle is approached with a deliberate focus on equality, it could exacerbate existing disparities, both within and between countries.

Although many would recognise that everyone should benefit from AI development, action toward greater equality in AI ecosystems remains one of the least-funded areas of AI governance. There is a glaring need to act, and to establish a strong global framework that enables policymakers to achieve gender equality and diversity in AI.

 

Insights and recommendations for transformative AI policy

To address the root causes of AI inequalities, we need to reverse the historical exclusion of individuals and communities. With this goal in mind, the Global Partnership on AI (GPAI) Responsible AI Working Group, supported by Mila – Quebec Artificial Intelligence Institute and CEIMIA, has produced a report, "Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity," and an accompanying policy guide to help policymakers implement the report’s recommendations.

Based on extensive consultations with more than 200 participants from over 50 countries, representing a diverse array of communities, identities, and fields of expertise, the report and policy guide call for substantive equality in AI. Together, they provide policy insights, examples of promising practices, and actionable recommendations, as well as a step-by-step roadmap for implementing them. All of this rests on a human-rights-based framework focused on gender equality and diversity.

Let’s take a bird’s-eye view of some of the key recommendations from the report and policy guide.

 

Levelling the playing field: Inclusive design and democratic innovation

When thinking about the inclusive development of AI, we need a paradigm shift that moves beyond simply "adding" marginalised groups to AI discussions. Instead, we should decentre AI itself and centre communities, using inclusive design and democratic innovation practices to directly address the systemic disadvantages that keep women and other marginalised groups from participating in the development of the technologies they need.

There are tangible ways in which policymakers can contribute to this process of meaningful inclusion — whether that means investing in capacity building for institutions, allowing the processing of special data categories, or funding transformative technology research and design.

One illustration of inclusive design in action is the Feminist AI Research Network (f<A+i>r), an initiative of nearly 100 feminist AI academics, activists and practitioners from different fields championing multidisciplinary knowledge sharing and feminist innovation worldwide. It’s both a community-driven approach and a technologically advanced, innovative one. The network provides new data, algorithms, models, policies and systems that can be used to correct for real-life harm and barriers to women’s and other marginalised groups’ rights, representation and equality. 

From a policy perspective, it is crucial to fund and support such initiatives and, by extension, the development and implementation of inclusive AI systems and processes. We must create the conditions for traditionally excluded people and communities to meaningfully participate as central actors in AI. Marginalised communities’ active, intentional involvement in every phase of AI development, deployment and governance is essential to ensure that AI is beneficial for all.

 

Who gets a say? Meaningful participation in AI governance

Simply having access to AI doesn’t necessarily mean that women and other marginalised groups get to participate in shaping its development and governance. A human rights-centred approach to AI demands that people have a real say in the development and deployment of technologies that can deeply impact their lives. Decisions about AI impact everyone, so those decisions must reflect the values and priorities of all communities, particularly those historically excluded. This requires active public engagement, capacity-building for marginalised groups, and legal protections for public participation rights and collective data rights.

Historically, systemic barriers have limited marginalised communities’ career growth, skill development, and community awareness, not only of AI but of earlier emerging technologies. These barriers stem from economic inequalities, limited access to training opportunities, inadequate digital infrastructure, and cultural obstacles. Thankfully, many initiatives are working to remove them and enable democratic innovation and meaningful participation in AI governance.

One such initiative is the Indigenous Pathfinders in AI program, led by Mila – Québec AI Institute in partnership with Indspire: a career pathway that creates the conditions for Indigenous talent to shape the future of AI. Centred on Indigenous worldviews and values, the program enables Indigenous communities to drive AI development in ways that benefit them.

The lessons from this initiative help us understand how Indigenous communities and cultures around the world can uniquely contribute to AI technologies that reflect their worldviews. Policymakers can work towards more inclusive AI governance by holding awareness and consultation sessions with marginalised groups to understand their priorities and needs, and by developing policies that address them. Equipped with these perspectives, they can then fund and support educational, professional, and financial initiatives that allow marginalised communities to participate meaningfully in AI ecosystems, lead within them, and develop their own AI technologies.

 

Building trust: Transparency, accountability, and access to justice

In the current AI landscape, there is a critical need for transparency, accountability and access to justice in AI-related processes and decision-making. Ensuring that robust frameworks are in place to prevent harm and discrimination from AI is crucial. Yet to establish trust in AI, and to harness its benefits, there must also be mechanisms for when such frameworks fail. If communities are excluded, harmed or discriminated against by AI systems, whether incidentally or intentionally, they must have legal recourse and the means to take action to redress the harm.

Transparency and accountability contribute to substantive equality by allowing us to audit the systems in place. If we can publicly scrutinise AI systems and processes, detect biases, and hold private and public providers accountable for harmful impacts, we gain far better visibility into the issues at hand. That, in turn, allows us to advocate for equality and correct structural exclusion in an informed, cohesive way. The report and policy guide outline several recommendations to improve transparency and accountability, including guaranteeing the right to information, enhancing algorithmic transparency, conducting human rights impact assessments, and establishing public procurement guidelines across the AI lifecycle.
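To make "detecting biases" concrete, here is a minimal, illustrative sketch of one check an algorithmic audit might run: comparing selection rates across demographic groups, a measure often called the demographic parity difference. The groups, decisions, and helper function below are hypothetical, not taken from the report or any specific audit framework; real audits combine many such quantitative metrics with qualitative human rights impact assessments.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Rate of positive decisions per group.

    outcomes: iterable of (group, decision) pairs, decision in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decisions from an automated screening system.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = selection_rates(decisions)

# Demographic parity difference: the gap between the highest and lowest
# group-level selection rates. A large gap flags a disparity worth
# investigating; it does not by itself prove discrimination.
gap = max(rates.values()) - min(rates.values())
print(rates)                      # roughly {'group_a': 0.67, 'group_b': 0.33}
print(f"parity gap = {gap:.2f}")  # parity gap = 0.33
```

A single number like this is only a starting point: which metric is appropriate, and what gap is acceptable, are themselves policy questions that the transparency and accountability mechanisms described above are meant to help settle.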

A practical example we can learn from is the Global Index on Responsible AI, an initiative that analyses countries’ AI policies and national commitments to promoting inclusivity, accountability, and ethics in AI. The comprehensive, collaborative tool and its benchmarks then provide representative data to policymakers, researchers and journalists, allowing them to track and measure countries’ progress in defending human rights in AI. In turn, policymakers everywhere can use these benchmarks to design regulations that mandate transparency in AI operations and outline accountability mechanisms to ensure a responsible, inclusive use of AI. Once we have the data and we’ve turned them into insights, it’s time to move to action.

 

A roadmap towards substantive equality in AI ecosystems

To ensure that AI benefits everyone, AI systems must be designed inclusively, governed democratically, and equipped with mechanisms of accountability and justice to redress potential harms. With an impressive array of initiatives championing these causes, it is clear that governments have a responsibility to develop transformative AI policies that ensure the AI ecosystem does not reproduce inequalities within and across countries. So now we move to what may be the toughest part of all: taking action.

Understandably, many actors may feel unsure of how to proceed with this nascent and rapidly evolving technology. This is where a step-by-step roadmap, like the one provided in the Policy Guide for Implementing Transformative AI Policy Recommendations, can come in handy.

Figure 1: A step-by-step roadmap for the implementation of transformative AI policy

 

Putting these policy insights into practice and integrating them into global regulations and legislation will require extensive coordination. Institutions around the world will need to invest in these areas to make AI just. That could take a number of forms: investing in their own capacity as well as in capacity-building for others; investing in people, in training, and in creating institutions; or outlining the rights and responsibilities institutions have to enable access to justice. Public regulatory sandboxes are also useful tools for safe innovation, allowing iterative testing that balances technological experimentation with the need for clear regulatory boundaries.

Granted, there is no one-size-fits-all solution for more responsible, equitable and inclusive AI ecosystems. The recommendations in the GPAI report and policy guide are also meant to be adapted and applied in context – geographical, social, cultural, historical, economic, legal, and political. The actions stakeholders can realistically take will vary enormously depending on legal and regulatory frameworks, technical capacities, resource availability and stakeholder co-operation, and other community-specific obstacles to implementation.

To realise this vision, we must support local initiatives while promoting international collaboration. Engaging with diverse global stakeholders is necessary to develop inclusive policies that can adapt to the rapid pace of AI advancements. Therefore, global actors must seek out and amplify the voices of the global majority, and ensure that they form an integral part of AI governance. By prioritising equity and inclusion, we can harness AI's power to create a more just and equitable future for all.

This article was originally published on the OECD’s AI Wonk Blog.