Portrait of Eva Portelance

Eva Portelance

Associate Academic Member
Assistant Professor, HEC Montréal, Department of Decision Sciences
IVADOLabs
Research Topics
Cognitive science
Natural language processing

Biography

I am an Assistant Professor of machine learning in the Department of Decision Sciences at HEC Montréal. I am also an Associate Academic Member of Mila - Quebec Artificial Intelligence Institute.

My research sits at the intersection of AI and cognitive science. I am interested in how humans and machines learn to understand language and to reason about complex problems.

Before joining HEC Montréal, I was a postdoctoral researcher at Mila and in the NLP group at McGill University, where I worked with Timothy O'Donnell and Siva Reddy.

I completed my PhD in computational/cognitive linguistics at Stanford University, advised by Professors Dan Jurafsky and Mike C. Frank, as part of the Stanford NLP Group and the Stanford Language and Cognition Lab. I am an interdisciplinarian at heart, skilled at solving complex problems.

Publications

Learning Action and Reasoning-Centric Image Editing from Videos and Simulations
Benno Krojer
Dheeraj Vattikonda
Luis Lara
Varun Jampani
An image editing model should be able to perform diverse edits, ranging from object replacement, changing attributes or style, to performing actions or movement, which require many forms of reasoning. Current general instruction-guided editing models have significant shortcomings with action and reasoning-centric edits. Object, attribute or stylistic changes can be learned from visually static datasets. On the other hand, high-quality data for action and reasoning-centric edits is scarce and has to come from entirely different sources that cover e.g. physical dynamics, temporality and spatial reasoning. To this end, we meticulously curate the AURORA Dataset (Action-Reasoning-Object-Attribute), a collection of high-quality training data, human-annotated and curated from videos and simulation engines. We focus on a key aspect of quality training data: triplets (source image, prompt, target image) contain a single meaningful visual change described by the prompt, i.e., truly minimal changes between source and target images. To demonstrate the value of our dataset, we evaluate an AURORA-finetuned model on a new expert-curated benchmark (AURORA-Bench) covering 8 diverse editing tasks. Our model significantly outperforms previous editing models as judged by human raters. For automatic evaluations, we find important flaws in previous metrics and caution their use for semantically hard editing tasks. Instead, we propose a new automatic metric that focuses on discriminative understanding. We hope that our efforts: (1) curating a quality training dataset and an evaluation benchmark, (2) developing critical evaluations, and (3) releasing a state-of-the-art model, will fuel further progress on general image editing.
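The triplet structure described in the abstract can be pictured as a simple record: one "before" image, one instruction naming a single visual change, and one "after" image. The sketch below is purely illustrative; the field names and file paths are hypothetical and do not reflect AURORA's actual data format.

```python
from dataclasses import dataclass

# Hypothetical sketch of a (source image, prompt, target image) triplet;
# names and paths are illustrative, not AURORA's released schema.
@dataclass
class EditTriplet:
    source_image: str  # path to the "before" frame
    prompt: str        # instruction describing one minimal visual change
    target_image: str  # path to the "after" frame

triplet = EditTriplet(
    source_image="frames/kitchen_000.png",
    prompt="move the cup from the table to the shelf",
    target_image="frames/kitchen_001.png",
)
print(triplet.prompt)
```

Keeping each triplet to a single meaningful change is what makes the supervision signal clean: the model cannot attribute the pixel difference to anything other than the instruction.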
Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models
Timothy John O'Donnell
Semantic and syntactic bootstrapping posit that children use their prior knowledge of one linguistic domain, say syntactic relations, to help later acquire another, such as the meanings of new words. Empirical results supporting both theories may tempt us to believe that these are different learning strategies, where one may precede the other. Here, we argue that they are instead both contingent on a more general learning strategy for language acquisition: joint learning. Using a series of neural visually-grounded grammar induction models, we demonstrate that both syntactic and semantic bootstrapping effects are strongest when syntax and semantics are learnt simultaneously. Joint learning results in better grammar induction, realistic lexical category learning, and better interpretations of novel sentence and verb meanings. Joint learning makes language acquisition easier for learners by mutually constraining the hypothesis spaces for both syntax and semantics. Studying the dynamics of joint inference over many input sources and modalities represents an important new direction for language modeling and learning research in both cognitive sciences and AI, as it may help us explain how language can be acquired in more constrained learning settings.
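The core idea of joint learning, as opposed to staged bootstrapping, can be sketched as optimizing one combined objective in which a syntax term and a semantics term constrain each other. This toy function is a hypothetical illustration of that idea, not the paper's actual model or loss.

```python
# Illustrative sketch only: joint learning as a single objective that
# weights a syntax (grammar induction) loss against a semantics (visual
# grounding) loss, so progress in one domain shapes the other. All names
# here are hypothetical.
def joint_loss(syntax_loss: float, semantics_loss: float,
               weight: float = 0.5) -> float:
    """Weighted combination minimized jointly rather than in stages."""
    return weight * syntax_loss + (1.0 - weight) * semantics_loss

print(joint_loss(2.0, 4.0))  # → 3.0
```

Under a staged (bootstrapping) view, one term would be minimized first and then frozen; under joint learning, both terms are minimized simultaneously, which is the contrast the abstract draws.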