Biasly

Artificial intelligence (AI) to detect and correct misogynistic language.


About the Project

Biasly is an AI research project leveraging cutting-edge natural language processing algorithms to identify and correct misogynistic language in written text, whether it is expressed overtly or covertly.

Biasly is being developed to explain why language can be interpreted as misogynistic and to suggest ways of rephrasing that reduce or remove the misogynistic implications.

How it Started

Biasly was originally devised by Andrea Jang, Carolyne Pelletier, Ines Moreno and Yasmeen Hitti during their summer training at the AI4Good Lab. Their goal was to build a machine learning tool capable of detecting and correcting gender bias in written text. To bring their idea to fruition, the team decided to undertake a Humanitarian AI Internship with Mila, where they developed a taxonomy for gender bias in text. Although their internship has since ended, Mila’s team of researchers has continued to pursue this work to fulfill Biasly’s vision.

Taking Biasly to a New Level

Biasly’s current team has been strategically assembled to ensure the responsible, high-impact delivery of this work.

Our ML researchers are Anna Richter* (MSc in Cognitive Science) and Brooklyn Sheppard* (MSc in Speech and Language Processing).

The project is being managed by Senior Projects Manager Allison Cohen, in collaboration with experts from the field, including Linguistics expert Dr. Elizabeth Smith; Gender Studies specialist Dr. Tamara Kneese; Natural Language Processing Advisor Dr. Yue Dong; and Language Processing Engineer Carolyne Pelletier.

The project has also been fortunate to benefit from the insight and expertise of AI Researcher Dr. Ioana Baldini and Gender Studies Expert Dr. Alex Ketchum.

Dataset

In 2024, the Biasly team released the complete dataset of misogynistic sentences. You can access it using the link below:

View the dataset
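
As a rough illustration, the sketch below shows how the released data might be loaded for a first look using pandas. The file name and column names are placeholders for this example only, not the dataset's actual schema; consult the download itself for the real format.

```python
# Minimal sketch of loading the released Biasly dataset for exploration.
# "biasly_dataset.csv" and the column names ("text", "is_misogynistic")
# are assumptions for illustration; check the actual download for its schema.
import pandas as pd

df = pd.read_csv("biasly_dataset.csv")  # hypothetical local copy of the download

# Quick look at the sentences and their annotations (assumed column names).
print(df.head())
print(df["is_misogynistic"].value_counts())
```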

Resources

BiaSWE: An Expert Annotated Dataset for Misogyny Detection in Swedish
Inspired by the Biasly project, AI Sweden has created a misogynistic dataset for the Swedish language.
Mila’s AI4Humanity and Biasly: Best Practices to Develop Socially Responsible AI
A case study on Biasly by the Rotman School of Management (University of Toronto).
Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset
Publication from the project presented at the ACL conference.
Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset
Publication from the project presented as a Spotlight at NeurIPS.

Meet the Team

Mila Members
Allison Cohen
Senior Manager, Applied Projects
Collaborators
Dr. Yue Dong (University of California, Riverside)
Dr. Tamara Kneese (University of California, Berkeley)
Carolyne Pelletier (Mantium)
Dr. Elizabeth Smith (UQÀM)
*Dr. Ioana Baldini (former collaborator, IBM Research)
*Dr. Alexandra Ketchum (former collaborator, McGill University)
*Anna Richter (former collaborator, Mila)
*Brooklyn Sheppard (former collaborator, Mila)

Partners

This project has benefited from the generous support of DRW. 

Have questions about the project?