
Negar Rostamzadeh

Associate Industry Member
Senior Research Scientist, Google Brain Ethical AI Team
Research Topics
Computer Vision
Generative Models
Multimodal Learning

Biography

Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team and an Associate Industry Member at Mila - Quebec Artificial Intelligence Institute. Her research focuses on understanding the social implications of machine learning and evaluation systems, and on developing equitable and fair ML systems.

Negar has a deep interest in creative applications of computer vision and their impact on society and artists. She is the founder and program chair of the workshop series "Computer Vision for Fashion, Art, and Design" and "Ethical Considerations in Creative Applications," featured at computer vision venues from ECCV 2018 to CVPR 2023.

Before joining Google, Negar was a research scientist at Element AI (acquired by ServiceNow), where she specialized in efficient learning from limited data for computer vision and multimodal problems.

She completed her PhD in 2017 at the University of Trento under the supervision of Prof. Nicu Sebe, focusing on video understanding problems. She also spent two years (2015-2017) at Mila, working on attention mechanisms in videos, generative models, and video captioning under the guidance of Prof. Aaron Courville. In 2016, she interned with Google's Machine Intelligence team.

Negar actively contributes to the AI community. She has served as program chair of the workshop series "Science meets Engineering of Deep Learning" at ICLR, FAccT, and NeurIPS. Since 2020, she has been a board member of the Montreal AI Symposium, where she served as Senior Program Chair in 2019. She is also an Area Chair for vision conferences such as CVPR and ICCV, and has given multiple keynotes at workshops and conferences.

Current Students

Master's Research - McGill University
Principal supervisor:

Publications

Unlearning Geo-Cultural Stereotypes in Multilingual LLMs
Alireza Dehghanpour Farashah
Aditi Khandelwal
As multilingual generative models become more widely used, most safety and fairness evaluation techniques still focus on English-language resources, while overlooking important cross-cultural factors. This limitation raises concerns about fairness and safety, particularly regarding geoculturally situated stereotypes that hinder the models' global inclusivity. In this work, we present preliminary findings on the impact of stereotype unlearning across languages, specifically in English, French, and Hindi. Using an adapted version of the SeeGULL dataset, we analyze how unlearning stereotypes in one language influences other languages within multilingual large language models. Our study evaluates two model families, Llama-3.1-8B and Aya-Expanse-8B, to assess whether unlearning in one linguistic context transfers across languages, potentially mitigating or exacerbating biases in multilingual settings.
What Secrets Do Your Manifolds Hold? Understanding the Local Geometry of Generative Models
Ahmed Imtiaz Humayun
Ibtihel Amara
Cristina Nader Vasconcelos
Candice Schumann
Deepak Ramachandran
Junfeng He
Mohammad Havaei
Katherine A Heller
Deep generative models are frequently used to learn continuous representations of complex data distributions using a finite number of samples. For any generative model, including pre-trained foundation models with GAN, Transformer or Diffusion architectures, generation performance can vary significantly based on which part of the learned data manifold is sampled. In this paper we study the post-training local geometry of the learned manifold and its relationship to generation outcomes for models ranging from toy settings to the latent decoder of the near state-of-the-art Stable Diffusion 1.4 Text-to-Image model. Building on the theory of continuous piecewise-linear (CPWL) generators, we characterize the local geometry in terms of three geometric descriptors - scaling (
The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa
Mercy Nyamewaa Asiedu
Awa Dieng
Iskandar Haykel
Stephen R. Pfohl
Chirag Nagpal
Maria Nagawa
Abigail Oppong
Sanmi Koyejo
Katherine Heller
With growing application of machine learning (ML) technologies in healthcare, there have been calls for developing techniques to understand and mitigate biases these systems may exhibit. Fairness considerations in the development of ML-based solutions for health have particular implications for Africa, which already faces inequitable power imbalances between the Global North and South. This paper seeks to explore fairness for global health, with Africa as a case study. We conduct a scoping review to propose axes of disparities for fairness consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities. We then conduct qualitative research studies with 672 general population study participants and 28 experts in ML, health, and policy focused on Africa to obtain corroborative evidence on the proposed axes of disparities. Our analysis focuses on colonialism as the attribute of interest and examines the interplay between artificial intelligence (AI), health, and colonialism. Among the pre-identified attributes, we found that colonial history, country of origin, and national income level were specific axes of disparities that participants believed would cause an AI system to be biased. However, there was also divergence of opinion between experts and general population participants. Whereas experts generally expressed a shared view about the relevance of colonial history for the development and implementation of AI technologies in Africa, the majority of the general population participants surveyed did not think there was a direct link between AI and colonialism. Based on these findings, we provide practical recommendations for developing fairness-aware ML solutions for health in Africa.
A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Stephen R. Pfohl
Heather Cole-Lewis
Rory Sayres
Darlene Neal
Mercy Nyamewaa Asiedu
Awa Dieng
Nenad Tomasev
Qazi Mamunur Rashid
Shekoofeh Azizi
Liam G. McCoy
Leo Anthony Celi
Yun Liu
Mike Schaekermann
Alanna Walton
Alicia Parrish
Chirag Nagpal
Preeti Singh
Akeiylah Dewitt
Philip Mansfield
Sushant Prakash
Katherine Heller
Alan Karthikesalingam
Christopher Semturs
Joelle Barral
Greg Corrado
Yossi Matias
Jamila Smith-Loud
Ivor Horn
Karan Singhal
Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries
Mercy Nyamewaa Asiedu
Iskandar Haykel
Awa Dieng
K. Kauer
Tousif Ahmed
Florence Ofori
Charisma Chan
Stephen R. Pfohl
Katherine Heller
Understanding the Local Geometry of Generative Model Manifolds
Ahmed Imtiaz Humayun
Ibtihel Amara
Candice Schumann
Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the manifold representation learned is by computing global metrics like Fréchet Inception Distance using a large number of generated and real samples. However, generative model performance is not uniform across the learned manifold; e.g., for foundation models like Stable Diffusion, generation performance can vary significantly based on the conditioning or initial noise vector being denoised. In this paper we study the relationship between the local geometry of the learned manifold and downstream generation. Based on the theory of continuous piecewise-linear (CPWL) generators, we use three geometric descriptors - scaling (
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
Bias-inducing geometries: exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
Machine learning (ML) may be oblivious to human bias but it is not immune to its perpetuation. Marginalisation and iniquitous group representation are often traceable in the very data used for training, and may be reflected or even enhanced by the learning models. In this abstract, we aim to clarify the role played by data geometry in the emergence of ML bias. We introduce an exactly solvable high-dimensional model of data imbalance, where parametric control over the many bias-inducing factors allows for an extensive exploration of the bias inheritance mechanism. Through the tools of statistical physics, we analytically characterise the typical properties of learning models trained in this synthetic framework and obtain exact predictions for the observables that are commonly employed for fairness assessment. Simplifying the nature of the problem to its minimal components, we can retrace and unpack typical unfairness behaviour observed on real-world datasets.
On The Local Geometry of Deep Generative Manifolds
Ahmed Imtiaz Humayun
Ibtihel Amara
Candice Schumann
Mohammad Havaei
In this paper, we study theoretically inspired local geometric descriptors of the data manifolds approximated by pre-trained generative models. The descriptors, local scaling (ψ), local rank (ν), and local complexity (δ), characterize the uncertainty, dimensionality, and smoothness of the learned manifold, using only the network weights and architecture. We investigate and emphasize their critical role in understanding generative models. Our analysis reveals that the local geometry is intricately linked to the quality and diversity of generated outputs. Additionally, we see that the geometric properties are distinct for out-of-distribution (OOD) inputs as well as for prompts memorized by Stable Diffusion, showing the possible application of our proposed descriptors for downstream detection and assessment of pre-trained generative models.