
Biasly

An AI-powered antidote to human and computer-generated gender and racial bias

Project Description

Biasly AI applies a natural language processing algorithm to online text, not only to identify conscious and subconscious bias but also to debias problematic sentences.

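To make this detect-and-rewrite idea concrete, the sketch below shows how such a pipeline might be wired together in Python. It is a minimal illustration, not Biasly's actual implementation: the Hugging Face `transformers` pipeline API is real, but the model names are hypothetical placeholders for fine-tuned bias-detection and debiasing checkpoints.

```python
# Illustrative sketch only: Biasly's models are not public, so the model
# names below are placeholders for hypothetical fine-tuned checkpoints.
from transformers import pipeline

# Hypothetical classifier that labels a sentence as BIASED or NEUTRAL.
detector = pipeline("text-classification", model="example-org/bias-detector")

# Hypothetical seq2seq model fine-tuned to rewrite flagged sentences neutrally.
rewriter = pipeline("text2text-generation", model="example-org/bias-rewriter")

def debias(sentence: str, threshold: float = 0.9) -> str:
    """Return a neutral rewrite if the detector flags the sentence as biased."""
    prediction = detector(sentence)[0]  # e.g. {"label": "BIASED", "score": 0.97}
    if prediction["label"] == "BIASED" and prediction["score"] >= threshold:
        return rewriter(sentence)[0]["generated_text"]
    return sentence  # leave unflagged sentences untouched
```
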
This tool fills a high-priority gap in existing machine learning capabilities and serves the needs of a number of stakeholder groups, including:

  • Employers in industries dominated by a particular gender or racial demographic who are interested in altering the language used in job postings to attract more diverse applicants;
  • Marketing and/or news organizations interested in removing subconscious biases from their campaigns and articles;
  • Creators of internet forums and social media sites for the purpose of content moderation;
  • Researchers studying gender and racial discrimination; and,
  • Internet users who would like to more critically engage with the content they consume online.

About the Project

The Origin

Biasly AI was originally devised by Andrea Jang, Carolyne Pelletier, Ines Moreno and Yasmeen Hitti during their summer training at the AI4Good Lab. Their goal was to build a machine learning tool capable of detecting and correcting gender bias in written text. To bring their idea to life, the team undertook a Humanitarian AI Internship with Mila, where they developed a taxonomy for gender bias in text and a gender bias dataset. Although their internships have since come to an end, Mila’s team of researchers has taken on this work in order to realize the vision of Biasly AI.

Taking Biasly AI to a New Level

Biasly AI’s current team was assembled with diversity in mind, bringing together individuals of different academic disciplines, genders, ethnicities and nationalities. The social science workstream is being led by Dr. Jia Xue, founder of the Artificial Intelligence for Justice Lab at the University of Toronto. The technical workstream is being led by Dr. Yoshua Bengio, Scientific Director at Mila, and post-doctoral researcher Dianbo Liu. Our team also includes interns from Cameroon, Nigeria, Uganda and Benin.

In light of the current political and social climate, the team has decided to expand the scope of Biasly AI to cover not only gender bias but also racial bias. For the model to do the most good, racial bias must be captured and addressed alongside gender bias.

What’s Next

Biasly AI is expected to have a final prototype ready by September 30, 2021. Once the prototype is complete, the project will be polished and professionalized by the Applied Research team at Mila. Following the tool’s professionalization, Biasly AI will be deployed as a free, downloadable Chrome extension. Alongside the extension, the project will release tools to help others in the field build anti-bias AI applications, including an open-source dataset on racial and gender bias and a best-practice annotation guideline.

We are continuing to seek opportunities for financial support. If you are interested in becoming involved, please reach out to Allison Cohen at allison.cohen@mila.quebec.

Team Members

Collaborators

  • Guergana Savova 
  • Tim Miller
  • Danielle Bitterman
  • Zhuang Ma
  • Yang Xie 