Audrey Durand

Associate Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université Laval, Department of Computer Science and Software Engineering
Research Topics
Online Learning
Reinforcement Learning

Biography

Audrey Durand is an assistant professor in the Department of Computer Science and Software Engineering and in the Department of Electrical and Computer Engineering at Université Laval.

She specializes in algorithms that learn through interaction with their environment using reinforcement learning, and is particularly interested in leveraging these approaches in health-related applications.

Current Students

Master's Research - Université Laval
Master's Research - Université de Montréal
PhD - Université Laval
Master's Research - Université Laval
PhD - Université Laval
Master's Research - Université Laval
Master's Research - Université Laval
PhD - Université Laval

Publications

Nested-ReFT: Efficient Reinforcement Learning for Large Language Model Fine-Tuning via Off-Policy Rollouts
Yufei Cui
Boxing Chen
Prasanna Parthasarathi
A Guide to Robust Generalization: The Impact of Architecture, Pre-training, and Optimization Strategy
Deep learning models operating in the image domain are vulnerable to small input perturbations. For years, robustness to such perturbations was pursued by training models from scratch (i.e., with random initializations) using specialized loss objectives. Recently, robust fine-tuning has emerged as a more efficient alternative: instead of training from scratch, pretrained models are adapted to maximize predictive performance and robustness. To conduct robust fine-tuning, practitioners design an optimization strategy that includes the model update protocol (e.g., full or partial) and the specialized loss objective. Additional design choices include the architecture type and size, and the pretrained representation. These design choices affect robust generalization, which is the model's ability to maintain performance when exposed to new and unseen perturbations at test time. Understanding how these design choices influence generalization remains an open question with significant practical implications. In response, we present an empirical study spanning 6 datasets, 40 pretrained architectures, 2 specialized losses, and 3 adaptation protocols, yielding 1,440 training configurations and 7,200 robustness measurements across five perturbation types. To our knowledge, this is the most diverse and comprehensive benchmark of robust fine-tuning to date. While attention-based architectures and robust pretrained representations are increasingly popular, we find that convolutional neural networks pretrained in a supervised manner on large datasets often perform best. Our analysis both confirms and challenges prior design assumptions, highlighting promising research directions and offering practical guidance.
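The recipe described in the abstract (a pretrained model, a specialized loss, and an update protocol) can be sketched in a few lines. The PyTorch-style example below implements full-update adversarial fine-tuning with a PGD inner loop; it is a minimal illustration of the general approach, not the benchmark code from the paper, and the epsilon and step values are conventional placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: worst-case perturbation inside an L-inf ball."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()

def robust_finetune(model, loader, epochs=5, lr=1e-4):
    """Full-update protocol: all pretrained weights are adapted."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            loss = F.cross_entropy(model(x_adv), y)  # specialized (adversarial) loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

A partial-update protocol would simply pass a subset of parameters (e.g., only the final layers) to the optimizer instead of `model.parameters()`.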
On the Fundamental Limitations of Dual Static CVaR Decompositions in Markov Decision Processes
Mathieu Godbout
Multi-Agent Matrix Games with Individual Learners: How Exploration-Exploitation Strategies Impact the Emergence of Coordination
Julien Armand
Tommy Chien-Hsuan Lin
Coordination between independent learning agents in a multi-agent environment is an important problem where AI systems may impact each other's learning processes. In this paper, we study how individual agents converge to an optimal equilibrium in multi-agent settings where coordination is necessary to achieve optimality. Specifically, we cover both coordination to maximize each individual payoff and coordination to maximize the collective payoff (cooperation). We study the emergence of such coordination behaviours in two-player matrix games with unknown payoff matrices and noisy bandit feedback. We consider five different environments along with widely used deterministic and stochastic bandit strategies. We study how different learning strategies and observation noise influence convergence to the optimal equilibrium. Our results indicate that coordination often emerges more easily from interactions between deterministic agents, especially when they follow the same learning behaviour. However, stochastic learning strategies appear to be more robust in the presence of many optimal joint actions. Overall, noisy observations often help stabilize learning behaviours.
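As a rough illustration of this setup (not the paper's experiments), the sketch below runs two independent epsilon-greedy learners in a two-player coordination matrix game where each agent observes only its own action and a noisy sample of the joint payoff; the payoff matrix and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 coordination game: the joint payoff is maximal
# only when both players pick matching actions.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 0.5]])
NOISE_STD = 0.1  # bandit feedback is a noisy sample of the payoff

class EpsilonGreedy:
    """Independent learner: sees only its own action and a noisy reward."""
    def __init__(self, n_actions=2, eps=0.1):
        self.eps = eps
        self.counts = np.zeros(n_actions)
        self.values = np.zeros(n_actions)

    def act(self):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.values)))
        return int(np.argmax(self.values))

    def update(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

p1, p2 = EpsilonGreedy(), EpsilonGreedy()
for t in range(5000):
    a1, a2 = p1.act(), p2.act()
    r = PAYOFF[a1, a2] + rng.normal(0, NOISE_STD)  # shared noisy payoff
    p1.update(a1, r)
    p2.update(a2, r)

print("greedy joint action:", np.argmax(p1.values), np.argmax(p2.values))
```

Swapping the `act` method for a posterior-sampling rule would give the stochastic-strategy counterpart studied in the paper.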
Human-AI Alignment of Learning Trajectories in Video Games: a continual RL benchmark proposal
Yann Harel
François Paugam
We propose a design for a continual reinforcement learning (CRL) benchmark called GHAIA, centered on human-AI alignment of learning trajectories in structured video game environments. Using Super Mario Bros. as a case study, gameplay is decomposed into short, annotated scenes organized into diverse task sequences based on gameplay patterns and difficulty. Evaluation protocols measure both plasticity and stability, with flexible revisit and pacing schedules. A key innovation is the inclusion of high-resolution human gameplay data collected under controlled conditions, enabling direct comparison of human and agent learning. In addition to adapting classical CRL metrics like forgetting and backward transfer, we introduce semantic transfer metrics capturing learning over groups of scenes sharing similar game patterns. We demonstrate the feasibility of our approach on human and agent data, and discuss key aspects of the first release for community input.
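For readers unfamiliar with the classical CRL metrics mentioned above, the sketch below computes average forgetting and backward transfer from a task-by-task performance matrix, in the commonly used formulation; it is illustrative only, not the GHAIA evaluation code, and the toy scores are invented.

```python
import numpy as np

def forgetting(R):
    """R[i, j] = performance on task j after training on task i.
    Forgetting of task j: best past performance minus final performance."""
    T = R.shape[0]
    return np.mean([R[:T-1, j].max() - R[T-1, j] for j in range(T - 1)])

def backward_transfer(R):
    """How much later training changed earlier tasks:
    final performance minus performance right after learning each task."""
    T = R.shape[0]
    return np.mean([R[T-1, j] - R[j, j] for j in range(T - 1)])

# Toy 3-task example: rows are training stages, scores in [0, 1].
R = np.array([[0.80, 0.10, 0.05],
              [0.70, 0.85, 0.10],
              [0.65, 0.80, 0.90]])
print("forgetting:", forgetting(R))                 # 0.10 on average
print("backward transfer:", backward_transfer(R))   # negative => interference
```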
Optimal discounting for offline input-driven MDP
Offline reinforcement learning has gained a lot of popularity for its potential to solve industry challenges. However, real-world environments are often highly stochastic and partially observable, leading long-term planners to overfit to offline data in model-based settings. Input-driven Markov Decision Processes (IDMDPs) offer a way to work with some of the uncertainty by letting designers separate what the agent has control over (states) from what it cannot control (inputs) in the environment. These stochastic external inputs are often difficult to model. Under the assumption that the input model will be imperfect, we investigate the bias-variance tradeoff under shallow planning in IDMDPs. Paving the way to input-driven planning horizons, we also investigate the similarity of optimal planning horizons at different inputs given the structure of the input space.
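One concrete way to see the shallow-planning tradeoff is to plan with a smaller discount factor than the one used for evaluation, shortening the effective horizon so the planner leans less on an imperfect learned model. The value-iteration sketch below illustrates this on a generic tabular MDP; it is an assumed toy setup, not the paper's IDMDP formulation.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """P: (A, S, S) transition probabilities, R: (S, A) rewards.
    Returns the greedy policy and its value estimates."""
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return Q.argmax(axis=1), V_new
        V = V_new

# Toy random MDP standing in for an imperfect learned model.
rng = np.random.default_rng(0)
A, S = 2, 4
P_hat = rng.dirichlet(np.ones(S), size=(A, S))  # estimated transitions
R_hat = rng.random((S, A))                      # estimated rewards

# Shallow planning: gamma_plan = 0.8 even if evaluation uses, say, 0.99.
# A shorter horizon adds bias but reduces the variance from model error.
policy, _ = value_iteration(P_hat, R_hat, gamma=0.8)
print("shallow-planning policy:", policy)
```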
Platform-based Adaptive Experimental Research in Education: Lessons Learned from The Digital Learning Challenge
Ilya Musabirov
Mohi Reza
Haochen Song
Steven Moore
Pan Chen
Harsh Kumar
Tong Li
John Stamper
Norman Bier
Anna Rafferty
Thomas Price
Nina Deliu
Michael Liut
Joseph Jay Williams
We report on our experience with a real-world, multi-experimental evaluation of an adaptive experimentation platform within the XPRIZE Digital Learning Challenge framework. We showcase how EASI (Experiment as a Service) cross-platform software supports quick integration and deployment of adaptive experiments, as well as five systematic replications within a 30-day timeframe. We outline the key scenarios for the applicability of platform-supported experiments and reflect on lessons learned from this two-year project that can help researchers and practitioners integrate adaptive experiments into real-world courses.
Adaptive Experiments Under Data Sparse Settings: Applications for Educational Platforms
Haochen Song
Ilya Musabirov
Ananya Bhattacharjee
Meredith Franklin
Anna Rafferty
Joseph Jay Williams
Adaptive experimentation is increasingly used in educational platforms to personalize learning through dynamic content and feedback. However, standard adaptive strategies such as Thompson Sampling often underperform in real-world educational settings where content variations are numerous and student participation is limited, resulting in sparse data. In particular, Thompson Sampling can lead to imbalanced content allocation and delayed convergence on which aspects of content are most effective for student learning. To address these challenges, we introduce Weighted Allocation Probability Adjusted Thompson Sampling (WAPTS), an algorithm that refines the sampling strategy to improve content-related decision-making in data-sparse environments. WAPTS is guided by the principle of lenient regret, allowing near-optimal allocations to accelerate learning while still exploring promising content. We evaluate WAPTS in a learnersourcing scenario where students rate peer-generated learning materials, and demonstrate that it enables earlier and more reliable identification of promising treatments.
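The details of WAPTS are not given here, so the sketch below shows a generic Beta-Bernoulli Thompson Sampling loop with a hypothetical reweighting of allocation probabilities toward arms whose posterior means are near-optimal, in the spirit of lenient regret; it conveys the flavor of adjusting allocation in data-sparse settings and should not be read as the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def adjusted_thompson_step(alpha, beta, n_draws=500, tau=0.02):
    """One allocation step.
    1. Estimate Thompson allocation probabilities by Monte Carlo.
    2. Hypothetical adjustment: keep mass only on arms whose posterior
       mean is within tau of the best (a 'lenient' near-optimal set)."""
    K = len(alpha)
    samples = rng.beta(alpha, beta, size=(n_draws, K))
    base = np.bincount(samples.argmax(axis=1), minlength=K) / n_draws
    means = alpha / (alpha + beta)
    probs = base * (means >= means.max() - tau)
    if probs.sum() == 0:   # fall back to plain Thompson probabilities
        probs = base
    return probs / probs.sum()

# Simulated learnersourcing experiment: 4 peer-written explanations with
# unknown Bernoulli rating probabilities (numbers invented for the example).
true_p = np.array([0.35, 0.50, 0.52, 0.30])
alpha, beta = np.ones(4), np.ones(4)
for student in range(300):              # sparse data: only 300 participants
    probs = adjusted_thompson_step(alpha, beta)
    arm = rng.choice(4, p=probs)
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
print("posterior means:", np.round(alpha / (alpha + beta), 2))
```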
Adaptive Experiments Under High-Dimensional and Data Sparse Settings: Applications for Educational Platforms
Haochen Song
Ilya Musabirov
Ananya Bhattacharjee
Meredith Franklin
Anna Rafferty
Joseph Jay Williams
In online educational platforms, adaptive experiment designs play a critical role in personalizing learning pathways, instructional sequencing, and content recommendations. Traditional adaptive policies, such as Thompson Sampling, struggle with scalability in high-dimensional and sparse settings, such as when there is a large number of treatments (arms) and limited resources, including funding and time, constrain the number of students available in a classroom. Furthermore, under-exploration in large-scale educational interventions can lead to suboptimal learning recommendations. To address these challenges, we build upon the concept of lenient regret, which tolerates limited suboptimal selections to enhance exploratory learning, and propose a framework for determining the feasible number of treatments given a sample size. We illustrate these ideas with a case study in online educational learnersourcing, where adaptive algorithms dynamically allocate peer-crafted interventions to other students during active recall exercises. Our proposed Weighted Allocation Probability Adjusted Thompson Sampling (WAPTS) algorithm enhances the efficiency of treatment allocation by adjusting sampling weights to balance exploration and exploitation in data-sparse environments. We present comparative evaluations of WAPTS across various sample sizes (N=50, 300, 1000) and treatment conditions, demonstrating its ability to mitigate under-exploration while optimizing learning outcomes.