
Negar Rostamzadeh

Associate Industry Member
Senior Research Scientist, Google Brain Ethical AI Team
Research Topics
Computer Vision
Generative Models
Multimodal Learning

Biography

Negar Rostamzadeh is a Senior Research Scientist on the Responsible AI team at Google and an Associate Industry Member at Mila - Quebec Artificial Intelligence Institute. Her research focuses on understanding the social implications of machine learning and evaluation systems, and on developing equitable and fair ML systems.

Negar holds a deep interest in the creative applications of computer vision and their impact on society and artists. She is the founder and program chair of the workshop series "Computer Vision for Fashion, Art, and Design" and "Ethical Considerations in Creative Applications," held at computer vision venues from ECCV 2018 to CVPR 2023.

Before joining Google, Negar was a research scientist at Element AI (acquired by ServiceNow), where she specialized in efficient learning from limited data for computer vision and multimodal problems.

She completed her PhD in 2017 at the University of Trento under the supervision of Prof. Nicu Sebe, focusing on video understanding problems. She also spent two years (2015-2017) at Mila, working on attention mechanisms in videos, generative models, and video captioning under the guidance of Prof. Aaron Courville. In 2016, she interned with Google's Machine Intelligence team.

Negar is actively engaged in the broader AI community. She has served as program chair for the workshop series "Science meets Engineering of Deep Learning" at ICLR, FAccT, and NeurIPS. She has been a board member of the Montreal AI Symposium since 2020 and served as its Senior Program Chair in 2019. She is also an Area Chair for computer vision conferences such as CVPR and ICCV, and has given multiple keynotes at workshops and conferences.

Current Students

Master's Research - McGill University
Principal supervisor:

Publications

Bias-inducing geometries: an exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
Healthsheet: Development of a Transparency Artifact for Health Datasets
Diana Mincu
Subhrajit Roy
Andrew J Smart
Lauren Wilcox
Mahima Pushkarna
Jessica Schrouff
Razvan Amironesei
Nyalleng Moorosi
Katherine Heller
Machine learning (ML) approaches have demonstrated promising results in a wide range of healthcare applications. Data plays a crucial role in developing ML-based healthcare systems that directly affect people's lives. Many of the ethical issues surrounding the use of ML in healthcare stem from structural inequalities underlying the way we collect, use, and handle data. Developing guidelines to improve documentation practices regarding the creation, use, and maintenance of ML healthcare datasets is therefore of critical importance. In this work, we introduce Healthsheet, a contextualized adaptation of the original datasheet questionnaire [22] for health-specific applications. Through a series of semi-structured interviews, we adapt the datasheets for healthcare data documentation. As part of the Healthsheet development process, and to understand the obstacles researchers face in creating datasheets, we worked with three publicly available healthcare datasets as our case studies, each with a different type of structured data: Electronic Health Records (EHR), clinical trial study data, and smartphone-based performance outcome measures. Our findings from the interview study and case studies show 1) that datasheets should be contextualized for healthcare, 2) that despite incentives to adopt accountability practices such as datasheets, there is a lack of consistency in the broader use of these practices, 3) how the ML for health community views datasheets, and particularly Healthsheets, as a diagnostic tool to surface the limitations and strengths of datasets, and 4) the relative importance of different fields in the datasheet to healthcare concerns.
Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
Paul Nicholas
N'mah Fodiatu Yilla
Jess Gallegos
Andrew J Smart
Emilio Garcia
Gurleen Virk