Eva Portelance

Associate Academic Member
Assistant Professor, HEC Montréal, Department of Decision Sciences
IVADOLabs
Research Topics
Cognitive Science
Natural Language Processing

Biography

I'm an Assistant Professor of Machine Learning in the Department of Decision Sciences at HEC Montréal. I'm also an Associate Academic member at Mila - Quebec Artificial Intelligence Institute.

My research sits at the intersection of AI and Cognitive Science. I'm interested in studying how both humans and machines learn to understand language and reason about complex problems.

Before joining HEC Montréal, I was a postdoctoral fellow at Mila and McGill University’s NLP Group working with Timothy O’Donnell and Siva Reddy.

I completed my PhD in computational/cognitive linguistics at Stanford University, co-advised by professors Dan Jurafsky and Mike C. Frank, as part of the Stanford NLP group and the Stanford Language and Cognition Lab. I'm an interdisciplinarian at heart with a knack for hard problems.

Publications

Learning Action and Reasoning-Centric Image Editing from Videos and Simulation
Benno Krojer
Dheeraj Vattikonda
Luis Lara
Varun Jampani
An image editing model should be able to perform diverse edits, ranging from object replacement, changing attributes or style, to performing actions or movement, which require many forms of reasoning. Current general instruction-guided editing models have significant shortcomings with action and reasoning-centric edits. Object, attribute or stylistic changes can be learned from visually static datasets. On the other hand, high-quality data for action and reasoning-centric edits is scarce and has to come from entirely different sources that cover e.g. physical dynamics, temporality and spatial reasoning. To this end, we meticulously curate the AURORA Dataset (Action-Reasoning-Object-Attribute), a collection of high-quality training data, human-annotated and curated from videos and simulation engines. We focus on a key aspect of quality training data: triplets (source image, prompt, target image) contain a single meaningful visual change described by the prompt, i.e., truly minimal changes between source and target images. To demonstrate the value of our dataset, we evaluate an AURORA-finetuned model on a new expert-curated benchmark (AURORA-Bench) covering 8 diverse editing tasks. Our model significantly outperforms previous editing models as judged by human raters. For automatic evaluations, we find important flaws in previous metrics and caution against their use for semantically hard editing tasks. Instead, we propose a new automatic metric that focuses on discriminative understanding. We hope that our efforts: (1) curating a quality training dataset and an evaluation benchmark, (2) developing critical evaluations, and (3) releasing a state-of-the-art model, will fuel further progress on general image editing.
Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models
Timothy John O'Donnell
Semantic and syntactic bootstrapping posit that children use their prior knowledge of one linguistic domain, say syntactic relations, to help later acquire another, such as the meanings of new words. Empirical results supporting both theories may tempt us to believe that these are different learning strategies, where one may precede the other. Here, we argue that they are instead both contingent on a more general learning strategy for language acquisition: joint learning. Using a series of neural visually-grounded grammar induction models, we demonstrate that both syntactic and semantic bootstrapping effects are strongest when syntax and semantics are learnt simultaneously. Joint learning results in better grammar induction, realistic lexical category learning, and better interpretations of novel sentence and verb meanings. Joint learning makes language acquisition easier for learners by mutually constraining the hypothesis spaces for both syntax and semantics. Studying the dynamics of joint inference over many input sources and modalities represents an important new direction for language modeling and learning research in both the cognitive sciences and AI, as it may help us explain how language can be acquired in more constrained learning settings.