
Negar Rostamzadeh

Associate Industry Member
Senior Research Scientist, Google Brain Ethical AI Team
Research Topics
Multimodal Learning
Generative Models
Computer Vision

Biography

Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team and an Associate Industry Member at Mila - Quebec Artificial Intelligence Institute. Her research focuses on understanding the social implications of machine learning and evaluation systems, and on building fair and equitable artificial intelligence systems.

Negar is particularly interested in creative applications of computer vision and their impact on society and on artists. She is the founder and program chair of the "Computer Vision for Fashion, Art, and Design" workshop series, as well as "Ethical Considerations in Creative Applications", held at computer vision venues from ECCV 2018 through CVPR 2023.

Before joining Google, Negar was a research scientist at Element AI (ServiceNow), where she focused on learning efficiently from limited data in computer vision and on multimodal problems.

She received her Ph.D. in 2017 from the University of Trento under the supervision of Professor Nicu Sebe, focusing on video understanding problems. She also spent two years at Mila (2015-2017), working on attention mechanisms in video, generative models, and video captioning under the supervision of Prof. Aaron Courville. In 2016, she interned with the Machine Intelligence team at Google.

Negar is actively involved in service to the AI community. She has served as program chair for the "Science meets Engineering of Deep Learning" workshop series at ICLR, FAccT, and NeurIPS. Since 2020, she has been a board member of the Montreal AI Symposium, and in 2019 she served as its senior program chair. Negar is also an Area Chair for vision conferences such as CVPR and ICCV, and has given several keynotes at various workshops and conferences.

Current Students

Master's Research - McGill
Principal supervisor:

Publications

Reinforced Imitation in Heterogeneous Action Space
Konrad Żołna
Sungjin Ahn
Pedro O. Pinheiro
Imitation learning is an effective alternative approach to learn a policy when the reward function is sparse. In this paper, we consider a challenging setting where an agent and an expert use different actions from each other. We assume that the agent has access to a sparse reward function and state-only expert observations. We propose a method which gradually balances between the imitation learning cost and the reinforcement learning objective. In addition, this method adapts the agent's policy based on either mimicking expert behavior or maximizing sparse reward. We show, through navigation scenarios, that (i) an agent is able to efficiently leverage sparse rewards to outperform standard state-only imitation learning, (ii) it can learn a policy even when its actions are different from the expert, and (iii) the performance of the agent is not bounded by that of the expert, due to the optimized usage of sparse rewards.
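The "gradual balancing" idea in the abstract can be illustrated with a short sketch: a single coefficient mixes an imitation loss with a sparse-reward RL objective and is annealed over training. The linear schedule and all names below are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of gradually balancing an imitation loss against an RL
# objective; the schedule and names are hypothetical, for illustration only.
def mixing_coefficient(step, total_steps):
    """Linearly shift weight from imitation (1.0) toward RL (0.0)."""
    return max(0.0, 1.0 - step / total_steps)

def combined_loss(imitation_loss, rl_loss, step, total_steps):
    """Weighted sum of the two scalar objectives."""
    alpha = mixing_coefficient(step, total_steps)
    return alpha * imitation_loss + (1.0 - alpha) * rl_loss

# Early in training the agent mostly mimics the expert; later it mostly
# maximizes the sparse reward, so its performance is not bounded by the expert.
print(combined_loss(imitation_loss=0.8, rl_loss=1.2, step=100, total_steps=1000))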
Towards Standardization of Data Licenses: The Montreal Data License
Misha Benjamin
P. Gagnon
Alex Shee
This paper provides a taxonomy for the licensing of data in the fields of artificial intelligence and machine learning. The paper's goal is to build towards a common framework for data licensing akin to the licensing of open source software. Increased transparency and resolving conceptual ambiguities in existing licensing language are two noted benefits of the approach proposed in the paper. In parallel, such benefits may help foster fairer and more efficient markets for data through bringing about clearer tools and concepts that better define how data can be used in the fields of AI and ML. The paper's approach is summarized in a new family of data license language - the Montreal Data License (MDL). Alongside this new license, the authors and their collaborators have developed a web-based tool to generate license language espousing the taxonomies articulated in this paper.
Hierarchical Adversarially Learned Inference
Ishmael Belghazi
Sai Rajeswar
Olivier Mastropietro
Jovana Mitrovic
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference models are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.
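The Markovian hierarchy described above (a generative chain z2 -> z1 -> x, an inference chain x -> z1 -> z2, and a discriminator over the joint) can be sketched roughly as follows. The MLP layers, dimensions, and PyTorch layout are illustrative assumptions, not the paper's architecture.

# Rough sketch of a two-level hierarchical generator/inference pair trained
# adversarially; sizes and the use of plain MLPs are assumptions.
import torch
import torch.nn as nn

def mlp(d_in, d_out, d_hidden=256):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_out))

class HierarchicalGenerator(nn.Module):
    def __init__(self, d_x=784, d_z1=64, d_z2=16):
        super().__init__()
        self.p_z1_given_z2 = mlp(d_z2, d_z1)  # abstract level -> intermediate level
        self.p_x_given_z1 = mlp(d_z1, d_x)    # intermediate level -> data

    def forward(self, z2):
        z1 = self.p_z1_given_z2(z2)
        x = torch.sigmoid(self.p_x_given_z1(z1))
        return x, z1

class HierarchicalInference(nn.Module):
    def __init__(self, d_x=784, d_z1=64, d_z2=16):
        super().__init__()
        self.q_z1_given_x = mlp(d_x, d_z1)
        self.q_z2_given_z1 = mlp(d_z1, d_z2)

    def forward(self, x):
        z1 = self.q_z1_given_x(x)
        z2 = self.q_z2_given_z1(z1)
        return z1, z2

# A discriminator scores the full joint (x, z1, z2); generator and inference
# networks are trained adversarially so that their joints match.
discriminator = mlp(784 + 64 + 16, 1)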
Deep Complex Networks
Chiheb Trabelsi
Olexa Bilaniuk
Ying Zhang
Dmitriy Serdyuk
Sandeep Subramanian
Joao Felipe Santos
Soroush Mehri
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.
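The complex convolution building block mentioned in the abstract can be expressed with two real-valued convolutions via the identity (A + iB) * (x + iy) = (Ax - By) + i(Ay + Bx). The sketch below is a minimal illustration of that identity in PyTorch; the module layout and parameters are assumptions, not the authors' reference implementation, and complex batch normalization is not shown.

# Minimal complex-valued 2D convolution built from two real-valued convolutions.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Separate real-valued kernels for the real (A) and imaginary (B) parts.
        self.conv_real = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)
        self.conv_imag = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)

    def forward(self, x_real, x_imag):
        # Real part: A*x - B*y ; imaginary part: A*y + B*x
        out_real = self.conv_real(x_real) - self.conv_imag(x_imag)
        out_imag = self.conv_real(x_imag) + self.conv_imag(x_real)
        return out_real, out_imag

# Example: a batch of 8 complex feature maps with 3 channels each.
conv = ComplexConv2d(3, 16, kernel_size=3, padding=1)
real, imag = conv(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))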