Portrait of Negar Rostamzadeh

Negar Rostamzadeh

Associate Industry Member
Senior Research Scientist, Google Brain Ethical AI Team
Research Topics
Multimodal Learning
Generative Models
Computer Vision

Biography

Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team and an Associate Industry Member at Mila - Quebec Artificial Intelligence Institute. Her research focuses on understanding the social implications of machine learning and evaluation systems, and on developing fair and equitable artificial intelligence systems.

Negar is keenly interested in creative applications of computer vision and their impact on society and on artists. She is the founder and program chair of the workshop series "Computer Vision for Fashion, Art, and Design" and "Ethical Considerations in Creative Applications", held at computer vision venues from ECCV 2018 through CVPR 2023.

Before joining Google, Negar worked as a researcher at Element AI (ServiceNow), where she specialized in efficient learning from limited data in computer vision and in multimodal problems.

She received her PhD in 2017 from the University of Trento under the supervision of Professor Nicu Sebe, focusing on video understanding problems. She also spent two years at Mila (2015-2017), working on attention mechanisms in video, generative models, and video captioning under the supervision of Prof. Aaron Courville. In 2016, she interned with Google's Machine Intelligence team.

Negar contributes actively to community initiatives within the AI community. She was program chair of the workshop series "Science meets Engineering of Deep Learning" at ICLR, FAccT, and NeurIPS. She has served on the board of the Montreal AI Symposium since 2020 and was its senior program chair in 2019. Negar is also an Area Chair for vision conferences such as CVPR and ICCV, and has given several keynotes at various workshops and conferences.

Current Students

Research Master's - McGill
Principal supervisor:

Publications

Understanding the Local Geometry of Generative Model Manifolds
Ahmed Imtiaz Humayun
Ibtihel Amara
Candice Schumann
Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the learned manifold representation is to compute global metrics like Fréchet Inception Distance using a large number of generated and real samples. However, generative model performance is not uniform across the learned manifold, e.g., for foundation models like Stable Diffusion, generation performance can vary significantly based on the conditioning or the initial noise vector being denoised. In this paper we study the relationship between the local geometry of the learned manifold and downstream generation. Based on the theory of continuous piecewise-linear (CPWL) generators, we use three geometric descriptors: scaling (ψ), rank (ν), and complexity (δ)…
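
As context for the global metric named in the abstract above: Fréchet Inception Distance compares Gaussian fits to real and generated feature embeddings. Below is a minimal sketch of that standard computation, assuming the feature means and covariances have already been estimated with an Inception-style network; it is background for the abstract, not this paper's code.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(mu_r, sigma_r, mu_g, sigma_g):
    """FID between Gaussians fitted to real (r) and generated (g) features.

    FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^{1/2})
    """
    diff = mu_r - mu_g
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```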
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
Bias-inducing geometries: exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
Machine learning (ML) may be oblivious to human bias, but it is not immune to its perpetuation. Marginalisation and iniquitous group representation are often traceable in the very data used for training, and may be reflected or even enhanced by the learning models. In this abstract, we aim to clarify the role played by data geometry in the emergence of ML bias. We introduce an exactly solvable high-dimensional model of data imbalance, where parametric control over the many bias-inducing factors allows for an extensive exploration of the bias inheritance mechanism. Through the tools of statistical physics, we analytically characterise the typical properties of learning models trained in this synthetic framework and obtain exact predictions for the observables commonly employed for fairness assessment. By simplifying the nature of the problem to its minimal components, we can retrace and unpack the typical unfairness behaviour observed in real-world datasets.
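
The kind of observable the paper predicts analytically can be illustrated by simulation. The sketch below is a hypothetical toy, not the paper's exactly solvable model: two subpopulations of unequal size with different class-conditional statistics are pooled to train one classifier, and a standard fairness observable, the per-group accuracy gap, is measured.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_major, n_minor = 50, 900, 100  # illustrative sizes, not from the paper

def sample_group(n, offset):
    """Binary labels with a group-specific class-conditional mean shift."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + np.outer(2 * y - 1, offset)
    return X, y

mu_major = rng.normal(size=d) / np.sqrt(d)
mu_minor = rng.normal(size=d) / np.sqrt(d)
Xa, ya = sample_group(n_major, mu_major)
Xb, yb = sample_group(n_minor, mu_minor)

# One classifier trained on the pooled, imbalanced data.
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fairness observable: accuracy gap between majority and minority groups.
Xa_t, ya_t = sample_group(2000, mu_major)
Xb_t, yb_t = sample_group(2000, mu_minor)
gap = clf.score(Xa_t, ya_t) - clf.score(Xb_t, yb_t)
print(f"majority-minority accuracy gap: {gap:.3f}")
```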
On The Local Geometry of Deep Generative Manifolds
Ahmed Imtiaz Humayun
Ibtihel Amara
Candice Schumann
Mohammad Havaei
In this paper, we study theoretically inspired local geometric descriptors of the data manifolds approximated by pre-trained generative models. The descriptors, local scaling (ψ), local rank (ν), and local complexity (δ), characterize the uncertainty, dimensionality, and smoothness of the learned manifold, using only the network weights and architecture. We investigate and emphasize their critical role in understanding generative models. Our analysis reveals that the local geometry is intricately linked to the quality and diversity of generated outputs. Additionally, we see that the geometric properties are distinct for out-of-distribution (OOD) inputs as well as for prompts memorized by Stable Diffusion, pointing to possible applications of our proposed descriptors for downstream detection and assessment of pre-trained generative models.
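
For intuition, two of these descriptors can be approximated from the generator's Jacobian at a latent point, since a piecewise-linear network is exactly linear on a neighbourhood of most inputs. The PyTorch sketch below is a rough illustration under that assumption, not the paper's implementation; `generator` is assumed to map a flat latent vector to a flat output, and the tolerance is arbitrary.

```python
import torch

def local_geometry(generator, z, tol=1e-3):
    """Rough estimates of local scaling (psi) and local rank (nu) at z.

    On a linear region of a piecewise-linear generator, the Jacobian
    J = dG/dz is constant; its singular values describe how a latent
    neighbourhood is stretched onto the learned manifold.
    """
    J = torch.autograd.functional.jacobian(generator, z)  # (out_dim, latent_dim)
    s = torch.linalg.svdvals(J)                 # sorted in descending order
    rank = int((s > tol * s.max()).sum())       # effective local dimensionality
    scaling = torch.log(s[:rank]).sum().item()  # log-volume change onto the manifold
    return scaling, rank
```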
A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Stephen R. Pfohl
Heather Cole-Lewis
Rory A Sayres
Darlene Neal
Mercy Nyamewaa Asiedu
Awa Dieng
Nenad Tomašev
Qazi Mamunur Rashid
Shekoofeh Azizi
Liam G. McCoy
L. A. Celi
Yun Liu
Mike Schaekermann
Alanna Walton
Alicia Parrish
Chirag Nagpal
Preeti Singh
Akeiylah Dewitt
P. A. Mansfield
Sushant Prakash
Katherine Heller
Alan Karthikesalingam
Christopher Semturs
Joelle Barral
Greg C. Corrado
Yossi Matias
Jamila Smith-Loud
Ivor Horn
Karan Singhal
The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa
Mercy Nyamewaa Asiedu
Awa Dieng
Alexander Haykel
Stephen R. Pfohl
Chirag Nagpal
Maria Nagawa
Abigail Oppong
Sanmi Koyejo
Katherine Heller
With growing application of machine learning (ML) technologies in healthcare, there have been calls for developing techniques to understand and mitigate biases these systems may exhibit. Fairness considerations in the development of ML-based solutions for health have particular implications for Africa, which already faces inequitable power imbalances between the Global North and South. This paper seeks to explore fairness for global health, with Africa as a case study. We conduct a scoping review to propose axes of disparities for fairness consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities. We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy focused on Africa to obtain corroborative evidence on the proposed axes of disparities. Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism. Among the pre-identified attributes, we found that colonial history, country of origin, and national income level were specific axes of disparities that participants believed would cause an AI system to be biased. However, there was also divergence of opinion between experts and general population participants. Whereas experts generally expressed a shared view about the relevance of colonial history for the development and implementation of AI technologies in Africa, the majority of the general population participants surveyed did not think there was a direct link between AI and colonialism. Based on these findings, we provide practical recommendations for developing fairness-aware ML solutions for health in Africa.
The value of standards for health datasets in artificial intelligence-based applications
Anmol Arora
Joseph E. Alderman
Joanne Palmer
Shaswath Ganapathi
Elinor Laws
Melissa D. McCradden
Lauren Oakden-Rayner
Stephen R. Pfohl
Marzyeh Ghassemi
Francis McKay
Darren Treanor
Bilal Mateen
Jacqui Gath
Adewole O. Adebajo
Stephanie Kuku
Rubeta Matin
Katherine Heller
Elizabeth Sapey
Neil J. Sebire
Heather Cole-Lewis
Melanie Calvert
Alastair Denniston
Xiaoxuan Liu
Breaking Barriers to Creative Expression: Co-Designing and Implementing an Accessible Text-to-Image Interface
Atieh Taheri
Mohammad Izadi
Gururaj Shriram
Shaun Kane
Text-to-image generation models have grown in popularity due to their ability to produce high-quality images from a text prompt. One use for this technology is to enable the creation of more accessible art-creation software. In this paper, we document the development of an alternative user interface that reduces the typing effort needed to enter image prompts by providing suggestions from a large language model, developed through iterative design and testing within the project team. The results of this testing demonstrate how generative text models can support the accessibility of text-to-image models, enabling users with a range of abilities to create visual art.
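
The core interaction the abstract describes, replacing typing with selection, reduces to a short loop. The sketch below is a generic illustration, not the authors' interface; `suggest` is a hypothetical stand-in for whatever large-language-model completion call is used.

```python
def suggest(partial_prompt: str, n: int = 3) -> list[str]:
    """Hypothetical hook: return n candidate continuations from an LLM."""
    raise NotImplementedError("plug an LLM completion call in here")

def assisted_prompt_entry(partial_prompt: str) -> str:
    """Offer model-generated completions so the user selects rather than types."""
    options = suggest(partial_prompt)
    for i, option in enumerate(options, start=1):
        print(f"{i}. {partial_prompt} {option}")
    choice = input("Pick a suggestion, or press Enter to keep typing: ")
    if choice.isdigit() and 1 <= int(choice) <= len(options):
        return f"{partial_prompt} {options[int(choice) - 1]}"
    return partial_prompt
```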
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Renelito Delos Santos
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. The results of our analysis demonstrate that the safety frameworks, neither of which is explicitly designed to examine social and ethical risks, can uncover failures and hazards that pose social and ethical risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model itself when examining social and ethical risks, especially when we have minimal information about an ML model.
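
FMEA's bookkeeping, at least, is easy to make concrete. In the sketch below, the pipeline stages, failure modes, and scores are invented for illustration; only the worksheet structure and the standard Risk Priority Number (severity × occurrence × detection, each on a 1-10 scale) follow the framework.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One FMEA worksheet row, adapted to a stage of an ML pipeline."""
    stage: str
    failure: str
    effect: str
    severity: int    # 1-10: how bad the effect is
    occurrence: int  # 1-10: how often the failure arises
    detection: int   # 1-10: how hard it is to catch (higher = harder)

    @property
    def rpn(self) -> int:
        """Risk Priority Number used to rank failure modes."""
        return self.severity * self.occurrence * self.detection

# Hypothetical rows mixing functional and social/ethical failures.
rows = [
    FailureMode("data processing", "caption filter drops dialectal text",
                "underrepresentation of some communities", 8, 6, 4),
    FailureMode("model integration", "safety classifier mislabels benign art",
                "over-blocking of legitimate creative use", 5, 5, 3),
]
for r in sorted(rows, key=lambda r: r.rpn, reverse=True):
    print(f"RPN={r.rpn:3d}  [{r.stage}] {r.failure}")
```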
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk
From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML
Shalaleh Rismani
Renee Shelby
Andrew J Smart
Edgar Jatho
Joshua A. Kroll
Healthsheet: Development of a Transparency Artifact for Health Datasets
Diana Mincu
Subhrajit Roy
Andrew J Smart
Lauren Wilcox
Mahima Pushkarna
Jessica Schrouff
Razvan Amironesei
Nyalleng Moorosi
Katherine Heller