Portrait of Alexandra Olteanu

Alexandra Olteanu

Associate Industry Member
Principal Researcher and Co-founder of the FATE team, Deep Learning and Machine Learning, Microsoft Research, Montréal
Research Topics
Information Retrieval
Natural Language Processing

Publications

FairPrism: Evaluating Fairness-Related Harms in Text Generation
Eve Fleisig
Aubrie Amstutz
Chad Atalla
Su Lin Blodgett
Hal Daumé III
Emily Sheng
Dan Vann
Hanna Wallach
It is critical to measure and mitigate fairness-related harms caused by AI text generation systems, including stereotyping and demeaning harms. To that end, we introduce FairPrism, a dataset of 5,000 examples of AI-generated English text with detailed human annotations covering a diverse set of harms relating to gender and sexuality. FairPrism aims to address several limitations of existing datasets for measuring and mitigating fairness-related harms, including improved transparency, clearer specification of dataset coverage, and accounting for annotator disagreement and harms that are context-dependent. FairPrism’s annotations include the extent of stereotyping and demeaning harms, the demographic groups targeted, and appropriateness for different applications. The annotations also include specific harms that occur in interactive contexts and harms that raise normative concerns when the “speaker” is an AI system. Due to its precision and granularity, FairPrism can be used to diagnose (1) the types of fairness-related harms that AI text generation systems cause, and (2) the potential limitations of mitigation methods, both of which we illustrate through case studies. Finally, the process we followed to develop FairPrism offers a recipe for building improved datasets for measuring and mitigating harms caused by AI systems.
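To make the abstract's description of the annotation schema concrete, here is a minimal sketch of how a FairPrism-style annotated example might be represented in code. The field names and value types are assumptions inferred from the abstract, not the dataset's actual schema.

```python
# Illustrative only: a plausible record shape for a FairPrism-style annotation,
# inferred from the abstract. Field names and types are assumptions, not the real schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class HarmAnnotation:
    text: str                                     # AI-generated English text being annotated
    stereotyping_degree: float                    # extent of stereotyping harm (e.g., 0-1)
    demeaning_degree: float                       # extent of demeaning harm (e.g., 0-1)
    target_groups: List[str]                      # demographic groups targeted
    application_appropriateness: Dict[str, bool]  # per-application suitability judgments
    interactive_context_harms: List[str] = field(default_factory=list)
    ai_speaker_concerns: List[str] = field(default_factory=list)  # normative concerns when the "speaker" is an AI


# Annotator disagreement could be preserved by keeping one HarmAnnotation per annotator
# rather than collapsing labels into a single gold judgment.
example = HarmAnnotation(
    text="...",
    stereotyping_degree=0.7,
    demeaning_degree=0.2,
    target_groups=["women"],
    application_appropriateness={"customer_support_bot": False, "creative_writing_aid": True},
)
```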
AHA!: Facilitating AI Impact Assessment by Generating Examples of Harms
Zana Buçinca
Chau Minh Pham
Maurice Jakesch
Marco Túlio Ribeiro
Saleema Amershi
While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI practitioners and decision-makers in anticipating potential harms and unintended consequences of AI systems prior to development or deployment. Given an AI deployment scenario, AHA! generates descriptions of possible harms for different stakeholders. To do so, AHA! systematically considers the interplay between common problematic AI behaviors as well as their potential impacts on different stakeholders, and narrates these conditions through vignettes. These vignettes are then filled in with descriptions of possible harms by prompting crowd workers and large language models. By examining 4113 harms surfaced by AHA! for five different AI deployment scenarios, we found that AHA! generates meaningful examples of harms, with different problematic AI behaviors resulting in different types of harms. Prompting both crowds and a large language model with the vignettes resulted in more diverse examples of harms than those generated by either the crowd or the model alone. To gauge AHA!'s potential practical utility, we also conducted semi-structured interviews with responsible AI professionals (N=9). Participants found AHA!'s systematic approach to surfacing harms important for ethical reflection and discovered meaningful stakeholders and harms they believed they would not have thought of otherwise. Participants, however, differed in their opinions about whether AHA! should be used upfront or as a secondary check, and noted that AHA! may shift harm anticipation from an ideation problem to a potentially demanding review problem. Drawing on our results, we discuss design implications of building tools to help practitioners envision possible harms.
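As a rough illustration of the framework's core idea (systematically crossing problematic AI behaviors with stakeholders and narrating each combination as a vignette to be completed by crowd workers or a language model), here is a minimal sketch. The behavior and stakeholder lists, the scenario, and the template wording are hypothetical, not taken from the paper.

```python
# Minimal sketch of AHA!-style vignette generation. The scenario, behavior taxonomy,
# stakeholder list, and template wording below are illustrative assumptions.
from itertools import product

SCENARIO = "A city deploys an AI system to prioritize housing-assistance applications."

PROBLEMATIC_BEHAVIORS = [  # illustrative, not the paper's taxonomy
    "makes systematically less accurate predictions for some groups",
    "relies on proxies that encode historical discrimination",
]
STAKEHOLDERS = [  # illustrative
    "applicants whose requests are deprioritized",
    "caseworkers who must act on the system's rankings",
]

VIGNETTE_TEMPLATE = (
    "{scenario} Suppose the system {behavior}. "
    "Describe a concrete harm this could cause for {stakeholder}."
)


def generate_vignettes(scenario: str) -> list:
    """Cross behaviors with stakeholders and narrate each pair as a vignette prompt."""
    return [
        VIGNETTE_TEMPLATE.format(scenario=scenario, behavior=b, stakeholder=s)
        for b, s in product(PROBLEMATIC_BEHAVIORS, STAKEHOLDERS)
    ]


for prompt in generate_vignettes(SCENARIO):
    print(prompt)  # each prompt would then be completed by crowd workers or an LLM
```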
Can Workers Meaningfully Consent to Workplace Wellbeing Technologies?
Shreya Chowdhary
Anna Kawakami
Jina Suh
Mary L. Gray
Koustuv Saha
Human-Centered Responsible Artificial Intelligence: Current & Future Trends
Mohammad Tahaei
Marios Constantinides
Daniele Quercia
Sean Kennedy
Michael Muller
Simone Stumpf
Q. Vera Liao
Ricardo Baeza-Yates
Lora Aroyo
Jess Holbrook
Ewa Luger
Michael Madaio
Ilana Golbin Blumenfeld
Maria De-Arteaga
Jessica Vitak
Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
Yu Lu Liu
Meng Cao
Su Lin Blodgett
Adam Trischler
The KITMUS Test: Evaluating Knowledge Integration from Multiple Sources
Akshatha Arodi
Martin Pömsl
Kaheer Suleman
Adam Trischler
Many state-of-the-art natural language understanding (NLU) models are based on pretrained neural language models. These models often make inferences using information from multiple sources. An important class of such inferences is those that require both background knowledge, presumably contained in a model’s pretrained parameters, and instance-specific information that is supplied at inference time. However, the integration and reasoning abilities of NLU models in the presence of multiple knowledge sources have been largely understudied. In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts. These subtasks differ in terms of which knowledge sources contain the relevant facts. We also introduce subtasks where knowledge is present only at inference time using fictional knowledge. We evaluate state-of-the-art coreference resolution models on our dataset. Our results indicate that several models struggle to reason on the fly over knowledge observed both at pretrain time and at inference time. However, with task-specific training, a subset of models demonstrates the ability to integrate certain knowledge types from multiple sources. Still, even the best-performing models seem to have difficulties with reliably integrating knowledge presented only at inference time.
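To give a concrete sense of the kind of test instance the abstract describes, the sketch below shows a hypothetical coreference example whose resolution requires combining background knowledge plausibly learned during pretraining with a fictional fact supplied only in the context. The task format and field names are assumptions, not the actual KITMUS schema.

```python
# Hypothetical KITMUS-style instance: resolving the pronoun requires combining a
# fictional, inference-time-only fact (what a "zorblat" does) with pretrained
# background knowledge (what a pediatrician does). Format and fields are illustrative.
instance = {
    "context": (
        "A zorblat is a person who repairs starship engines. "  # fictional knowledge, inference time only
        "Riya is a zorblat and Maya is a pediatrician. "
        "After the engine failure, she was called to the hangar."
    ),
    "pronoun": "she",
    "candidates": ["Riya", "Maya"],
    "answer": "Riya",  # requires linking the fictional occupation to the engine repair
}


def is_correct(prediction: str) -> bool:
    """Check a model's resolved antecedent against the gold answer."""
    return prediction.strip() == instance["answer"]
```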
ADEPT: An Adjective-Dependent Plausibility Task
Ali Emami
Ian Porada
Kaheer Suleman
Adam Trischler