
Aishwarya Agrawal

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Scientist, Google DeepMind, Montréal
Research Topics
Computer Vision
Deep Learning
Multimodal Learning
Natural Language Processing

Biography

Aishwarya Agrawal is an assistant professor in the Department of Computer Science and Operations Research at Université de Montréal, a Canada CIFAR AI Chair, and a core academic member of Mila – Quebec Artificial Intelligence Institute.

Agrawal also works as a research scientist one day a week at DeepMind. Previously, she held this position full time (August 2019 to December 2020). She completed her PhD in August 2019 at Georgia Tech, where she worked with Dhruv Batra and Devi Parikh.

Her research interests lie at the intersection of several sub-disciplines of AI: computer vision, deep learning and natural language processing. Her focus is on developing AI systems that can ‘see’ (i.e., understand the contents of an image: who, what, where, doing what?) and ‘talk’ (i.e., communicate that understanding to humans in free-form natural language).

Agrawal has received numerous awards and scholarships: the 2020 Georgia Tech Sigma Xi Best PhD Thesis Award, the 2020 Georgia Tech College of Computing Dissertation Award, a 2019 Google Fellowship (declined due to graduation), a 2019–2020 Facebook Fellowship (declined due to graduation) and a 2018–2019 NVIDIA Graduate Fellowship. She was one of two runners-up for the 2019 AAAI/ACM SIGAI Dissertation Award, and was selected as a 2018 Rising Star in EECS.

She holds a bachelor's degree in electrical engineering with a minor in computer science and engineering from the Indian Institute of Technology Gandhinagar (2014).

Current Students

Collaborating researcher - Université de Montréal
PhD - Université de Montréal
Collaborating researcher - University of British Columbia
Master's Research - McGill University
Principal supervisor:
PhD - Université de Montréal
Collaborating researcher - Korea University
Independent visiting researcher - Michigan State University
Master's Research - Université de Montréal
PhD - Université de Montréal
Collaborating researcher - International Institute of Information Technology
PhD - Université de Montréal
Professional Master's - Université de Montréal
Master's Research - Université de Montréal
Professional Master's - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal

Publications

Enhancing Multi-Agent Multi-Modal Collaboration with Fine-Grained Reward Modeling
Multi-Modal Large Language Models (MLLMs) have significantly advanced multi-modal reasoning but still struggle with compositional reasoning tasks. Multi-agent collaboration provides a promising solution by leveraging the distinct capabilities of different agents: a decomposer agent handles task breakdown while an answerer agent generates responses. While there have been efforts to adaptively decompose tasks based on the answerer agent's capabilities, such as using in-context learning, these methods often prove insufficient for fully effective decomposition. We address this issue by enhancing collaboration through fine-grained reward modeling, where each generated sub-question is assigned a specialized reward without requiring extra annotation or tuning of a reward model. Our proposed method dynamically optimizes the decomposition process, enabling better alignment between agents. Experimental results on four vision-language tasks demonstrate consistent improvements, with a 5.5% absolute increase in mean performance over traditional approaches. These findings highlight the efficacy of fine-grained reward modeling for enhancing multi-agent, multi-modal collaboration.
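A minimal sketch of the decomposer/answerer loop with per-sub-question rewards described above. This is an illustration, not the paper's implementation: the reward here is approximated by the answerer's self-reported confidence, and all function names and the threshold are assumptions.

```python
# Hedged sketch (not the paper's code): multi-agent decomposition where each
# generated sub-question receives a fine-grained reward. The reward is
# approximated here by the answerer's confidence in its sub-answer -- an
# assumption; the paper's exact scoring rule may differ.
from typing import Callable, List, Tuple

def decompose_with_rewards(
    question: str,
    decomposer: Callable[[str], List[str]],        # proposes sub-questions
    answerer: Callable[[str], Tuple[str, float]],  # returns (answer, confidence)
    reward_threshold: float = 0.5,                 # illustrative cutoff
) -> Tuple[List[Tuple[str, str, float]], str]:
    """Decompose `question`, score each sub-question, keep only useful ones."""
    scored = []
    for sub_q in decomposer(question):
        sub_a, confidence = answerer(sub_q)
        reward = confidence                        # fine-grained, per-sub-question signal
        if reward >= reward_threshold:             # drop sub-questions the answerer cannot handle
            scored.append((sub_q, sub_a, reward))
    # Fuse the retained sub-answers into a final prompt for the answerer.
    context = " ".join(f"Q: {q} A: {a}." for q, a, _ in scored)
    final_answer, _ = answerer(f"{context} Now answer: {question}")
    return scored, final_answer
```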
Visual Language Alignment Tuning
VisMin: Visual Minimal-Change Understanding
Fine-grained understanding of objects, attributes, and relationships between objects is crucial for visual-language models (VLMs). Existing … (see more)benchmarks primarily focus on evaluating VLMs' capability to distinguish between two very similar captions given an image. In this paper, we introduce a new, challenging benchmark termed Visual Minimal-Change Understanding (VisMin), which requires models to predict the correct image-caption match given two images and two captions. The image pair and caption pair contain minimal changes, i.e., only one aspect changes at a time from among the following: object, attribute, count, and spatial relation. These changes test the models' understanding of objects, attributes (such as color, material, shape), counts, and spatial relationships between objects. We built an automatic framework using large language models and diffusion models, followed by a rigorous 4-step verification process by human annotators. Empirical experiments reveal that current VLMs exhibit notable deficiencies in understanding spatial relationships and counting abilities. We also generate a large-scale training dataset to finetune CLIP and Idefics2, showing significant improvements in fine-grained understanding across benchmarks and in CLIP's general image-text alignment. We release all resources, including the benchmark, training data, and finetuned model checkpoints, at https://vismin.net/.
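A minimal sketch of the kind of two-image/two-caption matching VisMin evaluates, scored here with an off-the-shelf CLIP model from Hugging Face transformers. This is an illustration only, not the official benchmark code; the file names in the usage comment are hypothetical.

```python
# Hedged sketch of the VisMin-style matching task (illustration, not official code):
# given two minimally different images and two captions, assign each caption to the
# correct image. Scored here with CLIP image-text similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def vismin_style_match(image_paths, captions):
    """Return True if both image-caption pairs are matched correctly."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image   # shape: (2 images, 2 captions)
    # Correct match is assumed to lie on the diagonal: image i pairs with caption i.
    return bool((sims.argmax(dim=1) == torch.arange(len(captions))).all())

# Example usage (hypothetical file names):
# vismin_style_match(["cat_on_chair.jpg", "cat_under_chair.jpg"],
#                    ["a cat on a chair", "a cat under a chair"])
```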
Decompose and Compare Consistency: Measuring VLMs' Answer Reliability via Task-Decomposition Consistency Comparison
Despite tremendous advancements, current state-of-the-art Vision-Language Models (VLMs) are still far from perfect. They tend to hallucinate and may generate biased responses. In such circumstances, having a way to assess the reliability of a given response generated by a VLM is quite useful. Existing methods, such as estimating uncertainty using answer likelihoods or prompt-based confidence generation, often suffer from overconfidence. Other methods use self-consistency comparison but are affected by confirmation biases. To alleviate these, we propose Decompose and Compare Consistency (DeCC) for reliability measurement. By comparing the consistency between the direct answer generated using the VLM's internal reasoning process and the indirect answers obtained by decomposing the question into sub-questions and reasoning over the sub-answers produced by the VLM, DeCC measures the reliability of the VLM's direct answer. Experiments across six vision-language tasks with three VLMs show DeCC's reliability estimation achieves better correlation with task accuracy compared to existing methods.
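A minimal sketch of the DeCC idea as described in the abstract: the reliability of a direct answer is estimated from its consistency with an answer reached indirectly through question decomposition. The function names and the binary consistency score are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of DeCC-style reliability estimation (not the authors' implementation):
# compare the VLM's direct answer with an answer derived via decomposition, and use
# their agreement as the reliability score.
from typing import Callable, List

def decc_reliability(
    image, question: str,
    vlm_answer: Callable[[object, str], str],   # direct VQA call
    decompose: Callable[[str], List[str]],      # question -> sub-questions
    agree: Callable[[str, str], bool],          # answer equivalence check (e.g. an LLM judge)
) -> float:
    direct = vlm_answer(image, question)
    # Indirect path: answer the sub-questions, then re-answer the original question
    # conditioned on those sub-answers.
    sub_answers = [f"Q: {sq} A: {vlm_answer(image, sq)}" for sq in decompose(question)]
    indirect = vlm_answer(image, " ".join(sub_answers) + f" Given this, answer: {question}")
    # Consistency between the two paths serves as the reliability score.
    return 1.0 if agree(direct, indirect) else 0.0
```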
An Introduction to Vision-Language Modeling
Richard Yuanzhe Pang
Anurag Ajay
Alexander C. Li
Adrien Bardes
Suzanne Petryk
Zhiqiu Lin
Anas Mahmoud
Bargav Jayaraman
Mark Ibrahim
Melissa Hall
Yunyang Xiong
Candace Ross
Srihari Jayakumar
Chuan Guo
Diane Bouchacourt
Haider Al-Tahan
Karthik Padthe
Vasu Sharma
Huijuan Xu
Hu Xu
Xiaoqing Ellen Tan
Megan Richards
Samuel Lavoie
Pietro Astolfi
Jun Chen
Kushal Tirumala
Mazda Moayeri
Arjang Talattof
Kamalika Chaudhuri
Zechun Liu
Xilun Chen
Quentin Garrido
Karen Ullrich
Kate Saenko
Asli Celikyilmaz
Vikas Chandra
Improving Automatic VQA Evaluation Using Large Language Models
Eight years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the IID evaluation setting. However, our community is undergoing a shift towards open-ended generative models and OOD evaluation. In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate that the proposed metric correlates better with human judgment compared to existing metrics across several VQA models and benchmarks. We hope wide adoption of our metric will contribute to better estimating the research progress on the VQA task. We plan to release the evaluation code and collected human judgments.
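A minimal sketch of the answer-rating formulation the abstract describes. The prompt wording, the 1-5 rating scale, and the `llm` callable are illustrative assumptions, not the paper's released evaluation code.

```python
# Hedged sketch of LLM-based VQA answer rating as described in the abstract.
from typing import Callable, List

def llm_vqa_score(
    question: str,
    candidate: str,
    references: List[str],
    llm: Callable[[str], str],   # instruction-tuned LLM, returns text
) -> float:
    prompt = (
        "You are rating the correctness of an answer to a visual question.\n"
        f"Question: {question}\n"
        f"Reference answers: {', '.join(references)}\n"
        f"Candidate answer: {candidate}\n"
        "Rate the candidate from 1 (wrong) to 5 (fully correct). Reply with the number only."
    )
    rating = llm(prompt).strip()
    try:
        return (float(rating) - 1.0) / 4.0   # normalize rating to [0, 1]
    except ValueError:
        return 0.0                           # treat an unparseable rating as wrong
```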
Benchmarking Vision Language Models for Cultural Understanding
Sjoerd van Steenkiste
Lisa Anne Hendricks
Karolina Stanczak
Foundation models and vision-language pre-training have notably advanced Vision Language Models (VLMs), enabling multimodal processing of visual and linguistic data. However, their performance has typically been assessed on general scene understanding - recognizing objects, attributes, and actions - rather than cultural comprehension. This study introduces CulturalVQA, a visual question-answering benchmark aimed at assessing VLMs' geo-diverse cultural understanding. We curate a collection of 2,378 image-question pairs with 1-5 answers per question, representing cultures from 11 countries across 5 continents. The questions probe understanding of various facets of culture such as clothing, food, drinks, rituals, and traditions. Benchmarking VLMs on CulturalVQA, including GPT-4V and Gemini, reveals disparities in their level of cultural understanding across regions, with strong cultural understanding for North America but significantly lower performance for Africa. We observe disparities in performance across cultural facets too, with clothing, rituals, and traditions seeing higher performance than food and drink. These disparities help us identify areas where VLMs lack cultural understanding and demonstrate the potential of CulturalVQA as a comprehensive evaluation set for gauging VLM progress in understanding diverse cultures.
An Examination of the Robustness of Reference-Free Image Captioning Evaluation Metrics
MOQAGPT: Zero-Shot Multi-modal Open-domain Question Answering with Large Language Models
Yihong Wu
Fengran Mo
Jian-Yun Nie
Multi-modal open-domain question answering typically requires evidence retrieval from databases across diverse modalities, such as images, tables, passages, etc. Even Large Language Models (LLMs) like GPT-4 fall short in this task. To enable LLMs to tackle the task in a zero-shot manner, we introduce MoqaGPT, a straightforward and flexible framework. Using a divide-and-conquer strategy that bypasses intricate multi-modality ranking, our framework can accommodate new modalities and seamlessly transition to new models for the task. Built upon LLMs, MoqaGPT retrieves and extracts answers from each modality separately, then fuses this multi-modal information using LLMs to produce a final answer. Our methodology boosts performance on the MMCoQA dataset, improving F1 by +37.91 points and EM by +34.07 points over the supervised baseline. On the MultiModalQA dataset, MoqaGPT surpasses the zero-shot baseline, improving F1 by 9.5 points and EM by 10.1 points, and significantly closes the gap with supervised methods. Our codebase is available at https://github.com/lezhang7/MOQAGPT.
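A minimal sketch of the divide-and-conquer strategy described above: each modality is queried separately and an LLM fuses the candidate answers. This is an illustration under assumed function names; the official implementation is at the repository linked in the abstract.

```python
# Hedged sketch of a MoqaGPT-style divide-and-conquer pipeline (illustration only;
# see https://github.com/lezhang7/MOQAGPT for the authors' code).
from typing import Callable, Dict

def moqa_style_answer(
    question: str,
    modality_qa: Dict[str, Callable[[str], str]],   # e.g. {"text": ..., "table": ..., "image": ...}
    llm_fuse: Callable[[str], str],
) -> str:
    # Divide: obtain one candidate answer per modality, independently.
    candidates = {name: qa(question) for name, qa in modality_qa.items()}
    # Conquer: let the LLM reconcile the candidates into a final answer.
    listing = "\n".join(f"- from {name}: {ans}" for name, ans in candidates.items())
    prompt = (
        f"Question: {question}\n"
        f"Candidate answers retrieved from different modalities:\n{listing}\n"
        "Choose the most likely correct final answer. Reply with the answer only."
    )
    return llm_fuse(prompt)
```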
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering
In this paper, we explore effective prompting techniques to enhance zero- and few-shot Visual Question Answering (VQA) performance in contemporary Vision-Language Models (VLMs). Central to our investigation is the role of question templates in guiding VLMs to generate accurate answers. We identify that specific templates significantly influence VQA outcomes, underscoring the need for strategic template selection. Another pivotal aspect of our study is augmenting VLMs with image captions, providing them with additional visual cues alongside direct image features in VQA tasks. Surprisingly, this augmentation significantly improves the VLMs' performance in many cases, even though VLMs "see" the image directly! We explore chain-of-thought (CoT) reasoning and find that while standard CoT reasoning causes drops in performance, advanced methods like self-consistency can help recover it. Furthermore, we find that text-only few-shot examples enhance VLMs' alignment with the task format, particularly benefiting models prone to verbose zero-shot answers. Lastly, to mitigate the challenges associated with evaluating free-form open-ended VQA responses using string-matching based VQA metrics, we introduce a straightforward LLM-guided pre-processing technique to adapt the model responses to the expected ground-truth answer distribution. In summary, our research sheds light on the intricacies of prompting strategies in VLMs for VQA, emphasizing the synergistic use of captions, templates, and pre-processing to enhance model efficacy.
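A minimal sketch of the prompt-construction ideas the abstract discusses: combining a question template with an optional image caption as an extra textual cue. The template wording and field names are illustrative assumptions, not the paper's exact prompts.

```python
# Hedged sketch of caption-augmented VQA prompting (illustrative template wording).
from typing import Optional

def build_vqa_prompt(question: str, caption: Optional[str] = None,
                     template: str = "Question: {q} Short answer:") -> str:
    prompt = template.format(q=question)
    if caption:
        # Caption-augmented variant: prepend the caption as an additional visual cue.
        prompt = f"Image description: {caption}\n{prompt}"
    return prompt

# Example usage:
# build_vqa_prompt("What color is the bus?", caption="a yellow bus parked on a street")
# -> "Image description: a yellow bus parked on a street\nQuestion: What color is the bus? Short answer:"
```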