Portrait of Negar Rostamzadeh

Negar Rostamzadeh

Associate Industry Member
Senior Research Scientist, Google Brain Ethical AI Team

Biography

Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team and an Associate Industry Member at Mila - Quebec AI Institute. Her research focuses primarily on understanding the social implications of machine learning and evaluation systems, and on developing fair and just artificial intelligence systems.

Negar takes a close interest in creative applications of computer vision and their impact on society and on artists. She is the founder and program chair of the workshop series "Computer Vision for Fashion, Art, and Design" and "Ethical Considerations in Creative Applications", held at computer vision venues from ECCV 2018 through CVPR 2023.

Before joining Google, Negar worked as a research scientist at Element AI (ServiceNow), where she specialized in efficient learning from limited data in computer vision and in multimodal problems.

She completed her PhD in 2017 at the University of Trento under the supervision of Professor Nicu Sebe, focusing on video understanding problems. She also spent two years at Mila (2015-2017), working on attention mechanisms in video, generative models, and video captioning under the supervision of Prof. Aaron Courville. In 2016, she interned with the Machine Intelligence team at Google.

Negar contributes actively to various community engagements within the AI community. She served as program chair of the "Science meets Engineering of Deep Learning" workshop series at ICLR, FAccT, and NeurIPS. She has been a board member of the Montreal AI Symposium since 2020, and in 2019 she served as its senior program chair. Negar is also an Area Chair for vision conferences such as CVPR and ICCV, and has given several keynotes at various workshops and conferences.

Publications

A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Stephen R. Pfohl
Heather Cole-Lewis
Rory A Sayres
Darlene Neal
Mercy Nyamewaa Asiedu
Awa Dieng
Nenad Tomašev
Qazi Mamunur Rashid
Shekoofeh Azizi
Liam G. McCoy
L. A. Celi
Yun Liu
Mike Schaekermann
Alanna Walton
Alicia Parrish
Chirag Nagpal
Preeti Singh
Akeiylah Dewitt
P. A. Mansfield
Sushant Prakash
Katherine Heller
Alan Karthikesalingam
Christopher Semturs
Joelle Barral
Greg C. Corrado
Yossi Matias
Jamila Smith-Loud
Ivor Horn
Karan Singhal
The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa
Mercy Nyamewaa Asiedu
Awa Dieng
Alexander Haykel
Stephen R. Pfohl
Chirag Nagpal
Maria Nagawa
Abigail Oppong
Sanmi Koyejo
Katherine Heller
With growing application of machine learning (ML) technologies in healthcare, there have been calls for developing techniques to understand and mitigate biases these systems may exhibit. Fairness considerations in the development of ML-based solutions for health have particular implications for Africa, which already faces inequitable power imbalances between the Global North and South. This paper seeks to explore fairness for global health, with Africa as a case study. We conduct a scoping review to propose axes of disparities for fairness consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities. We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy focused on Africa to obtain corroborative evidence on the proposed axes of disparities. Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism. Among the pre-identified attributes, we found that colonial history, country of origin, and national income level were specific axes of disparities that participants believed would cause an AI system to be biased. However, there was also divergence of opinion between experts and general population participants. Whereas experts generally expressed a shared view about the relevance of colonial history for the development and implementation of AI technologies in Africa, the majority of the general population participants surveyed did not think there was a direct link between AI and colonialism. Based on these findings, we provide practical recommendations for developing fairness-aware ML solutions for health in Africa.
The value of standards for health datasets in artificial intelligence-based applications
Anmol Arora
Joseph E. Alderman
Joanne Palmer
Shaswath Ganapathi
Elinor Laws
Melissa D. McCradden
Lauren Oakden-Rayner
Stephen R. Pfohl
Marzyeh Ghassemi
Francis McKay
Darren Treanor
Bilal Mateen
Jacqui Gath
Adewole O. Adebajo
Stephanie Kuku
Rubeta Matin
Katherine Heller
Elizabeth Sapey
Neil J. Sebire
Heather Cole-Lewis
Melanie Calvert
Alastair Denniston
Xiaoxuan Liu
Breaking Barriers to Creative Expression: Co-Designing and Implementing an Accessible Text-to-Image Interface
Atieh Taheri
Mohammad Izadi
Gururaj Shriram
Shaun Kane
Text-to-image generation models have grown in popularity due to their ability to produce high-quality images from a text prompt. One use for this technology is to enable the creation of more accessible art creation software. In this paper, we document the development of an alternative user interface that reduces the typing effort needed to enter image prompts by providing suggestions from a large language model, developed through iterative design and testing within the project team. The results of this testing demonstrate how generative text models can support the accessibility of text-to-image models, enabling users with a range of abilities to create visual art.
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Renelito Delos Santos
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. Results of our analysis demonstrate that the safety frameworks, neither of which is explicitly designed to examine social and ethical risks, can uncover failures and hazards that pose social and ethical risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model when examining social and ethical risks, especially when we have minimal information about an ML model.
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Edgar Jatho
Joshua A. Kroll
Bias-inducing geometries: an exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
Healthsheet: Development of a Transparency Artifact for Health Datasets
Diana Mincu
Subhrajit Roy
Andrew J Smart
Lauren Wilcox
Mahima Pushkarna
Jessica Schrouff
Razvan Amironesei
Nyalleng Moorosi
Katherine Heller
Machine learning (ML) approaches have demonstrated promising results in a wide range of healthcare applications. Data plays a crucial role in developing ML-based healthcare systems that directly affect people's lives. Many of the ethical issues surrounding the use of ML in healthcare stem from structural inequalities underlying the way we collect, use, and handle data. Developing guidelines to improve documentation practices regarding the creation, use, and maintenance of ML healthcare datasets is therefore of critical importance. In this work, we introduce Healthsheet, a contextualized adaptation of the original datasheet questionnaire [22] for health-specific applications. Through a series of semi-structured interviews, we adapt the datasheets for healthcare data documentation. As part of the Healthsheet development process, and to understand the obstacles researchers face in creating datasheets, we worked with three publicly available healthcare datasets as our case studies, each with a different type of structured data: Electronic Health Records (EHR), clinical trial study data, and smartphone-based performance outcome measures. Our findings from the interview study and case studies show 1) that datasheets should be contextualized for healthcare; 2) that, despite incentives to adopt accountability practices such as datasheets, there is a lack of consistency in the broader use of these practices; 3) how the ML for health community views datasheets, and particularly Healthsheets, as a diagnostic tool to surface the limitations and strengths of datasets; and 4) the relative importance of different fields in the datasheet to healthcare concerns.
Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
Paul Nicholas
N'mah Fodiatu Yilla
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk