Publications

Evaluating In-Context Learning of Libraries for Code Generation
Arkil Patel
Pradeep Dasigi
Immunotherapeutic targeting of surfaceome heterogeneity in AML.
Marie-Eve Bordeleau
Éric Audemard
Arnaud Metois
Louis Theret
Véronique Lisi
Azer Farah
Jean-Francois Spinella
Jalila Chagraoui
Ossama Moujaber
Léo Aubert
Banafsheh Khakipoor
Laure Mallinger
Isabel Boivin
Nadine Mayotte
Azadeh Hajmirza
Eric Bonneil
Francois Béliveau
Sybille Pfammatter
Albert Feghaly
Geneviève Boucher …
Patrick Gendron
Pierre Thibault
Frederic Barabe
Guillaume Richard-Carpentier
Josée Hébert
Vincent-Philippe Lavallee
Philippe Roux
Guy Sauvageau
Implementation of a Global Pediatric Trauma Course in an Upper Middle–Income Country: A Pilot Study
Abbie Naus
Madeleine Carroll
Ayla Gerk
David P. Mooney
Natalie L. Yanchar
Julia Ferreira
Karen E. Gripp
Caroline Ouellet
Fabio Botelho
"One-Size-Fits-All"? Examining Expectations around What Constitute"Fair"or"Good"NLG System Behaviors
Li Lucy
Su Lin Blodgett
Milad Shokouhi
Hanna Wallach
Fairness-related assumptions about what constitute appropriate NLG system behaviors range from invariance, where systems are expected to behave identically for social groups, to adaptation, where behaviors should instead vary across them. To illuminate tensions around invariance and adaptation, we conduct five case studies, in which we perturb different types of identity-related language features (names, roles, locations, dialect, and style) in NLG system inputs. Through these case studies, we examine people's expectations of system behaviors, and surface potential caveats of these contrasting yet commonly held assumptions. We find that motivations for adaptation include social norms, cultural differences, feature-specific information, and accommodation; in contrast, motivations for invariance include perspectives that favor prescriptivism, view adaptation as unnecessary or too difficult for NLG systems to do appropriately, and are wary of false assumptions. Our findings highlight open challenges around what constitute "fair" or "good" NLG system behaviors.
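As a rough illustration of the perturbation setup described in the abstract, the sketch below varies a single identity-related feature (a name) while holding the rest of an NLG input fixed. The prompt template, names, and generate() stub are hypothetical, not the authors' actual protocol.

```python
# Hypothetical sketch of a name-perturbation case study for an NLG system.
# The template, the example names, and generate() are illustrative placeholders.

def generate(prompt: str) -> str:
    """Stand-in for a call to an NLG system (API or local model)."""
    return f"[generated text for prompt: {prompt!r}]"

TEMPLATE = "Write a short biography for {name}, a software engineer."

# Vary one identity-related feature (the name) while keeping the input otherwise fixed.
NAMES = ["Emily", "Lakisha", "Mohammed", "Wei"]

outputs = {name: generate(TEMPLATE.format(name=name)) for name in NAMES}

for name, text in outputs.items():
    print(f"--- {name} ---\n{text}\n")
```

Comparing the outputs side by side is what surfaces the tension the paper studies: under an invariance expectation they should be near-identical across names, while under an adaptation expectation they may legitimately differ.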
Pioneering women in nuclear and radiation sciences
Mirta Dumancic
A responsible framework for applying artificial intelligence on medical images and signals at the point-of-care: the PACS-AI platform.
Pascal Thériault-Lauzier
Denis Cobin
Olivier Tastet
Élodie Labrecque Langlais
B. Taji
Guson Kang
A. Chong
Derek So
An Tang
J. W. Gichoya
Pierre-Luc Deziel
Julie G. Hussin
Samuel Kadoury
Robert Avram
Revisiting the 2023 wildfire season in Canada
Flavie Pelletier
Michael A. Wulder
Joanne C. White
Txomin Hermosilla
Amortizing intractable inference in diffusion models for vision, language, and control
Siddarth Venkatraman
Moksh J. Jain
Luca Scimeca
Minsu Kim
Marcin Sendera
Mohsin Hasan
Luke Rowe
Sarthak Mittal
Pablo Lemos
Emmanuel Bengio
Alexandre Adam
Jarrid Rector-Brooks
Nikolay Malkin
Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies amortized sampling of the posterior over data, …
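The truncated abstract refers to posterior sampling with a diffusion prior; one standard way to write that setting (a sketch in generic notation, not necessarily the paper's exact formulation) is:

```latex
% Posterior over data x, combining a diffusion-model prior p(x)
% with a downstream constraint or reward r(x); notation is illustrative.
\[
  p^{\mathrm{post}}(\mathbf{x}) \;\propto\; p(\mathbf{x})\, r(\mathbf{x})
\]
```

Sampling from this unnormalized product is intractable in general, which is the posterior inference problem the abstract refers to and the target of the amortized sampler.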
μLO: Compute-Efficient Meta-Generalization of Learned Optimizers
Benjamin Thérien
Charles-Étienne Joseph
Boris Knyazev
Edouard Oyallon
How well do models of visual cortex generalize to out of distribution samples?
Yifei Ren
On shallow planning under partial observability
Randy Lefebvre
On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics -- Empirical Study on Brown Build and Risk Prediction
Doriane Olewicki
Sarra Habchi
Mathieu Nayrolles
Mojtaba Faramarzi
Bram Adams
Nowadays, software analytics tools using machine learning (ML) models to, for example, predict the risk of a code change are well established. However, as the goals of a project shift over time, and developers and their habits change, the performance of said models tends to degrade (drift) over time. Current retraining practices typically require retraining a new model from scratch on a large updated dataset when performance decay is observed, thus incurring a computational cost; moreover, there is no continuity between models, as the past model is discarded and ignored during the new model's training. Even though the literature has taken interest in online learning approaches, those have rarely been integrated and evaluated in industrial environments. This paper evaluates the use of lifelong learning (LL) for industrial use cases at Ubisoft, evaluating both the performance and the required computational effort in comparison to the retraining-from-scratch approaches commonly used by the industry. LL is used to continuously build and maintain ML-based software analytics tools using an incremental learner that progressively updates the old model using new data. To avoid so-called "catastrophic forgetting" of important older data points, we adopt a replay buffer of older data, which still allows us to drastically reduce the size of the overall training dataset, and hence model training time.
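A minimal sketch of the replay-buffer-based incremental updating the abstract describes, assuming a generic incremental learner with a partial_fit-style update; class and method names are illustrative, not the paper's or Ubisoft's implementation.

```python
# Illustrative sketch: lifelong (incremental) updates with a bounded replay buffer,
# as opposed to retraining a new model from scratch on the full updated dataset.
import random

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples = []

    def add(self, batch):
        # Keep a bounded random sample of older data points
        # to mitigate catastrophic forgetting.
        self.samples.extend(batch)
        if len(self.samples) > self.capacity:
            self.samples = random.sample(self.samples, self.capacity)

    def sample(self, k: int):
        return random.sample(self.samples, min(k, len(self.samples)))

def lifelong_update(model, new_batch, buffer: ReplayBuffer, replay_size: int = 256):
    """Update the existing model on new data mixed with replayed older data,
    instead of discarding it and retraining from scratch."""
    training_data = list(new_batch) + buffer.sample(replay_size)
    model.partial_fit(training_data)  # hypothetical incremental-learner interface
    buffer.add(new_batch)
    return model
```

The design point from the abstract: the buffer bounds how much old data is kept, so each update trains on far less data than a from-scratch retrain while still exposing the model to older examples to limit catastrophic forgetting.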