
Audrey Durand

Associate Academic Member
Canada CIFAR AI Chair
Associate professor, Université Laval, Department of Computer Science and Software Engineering
Research Topics
AI for Science
Online Learning
Reinforcement Learning

Biography

Audrey Durand is an associate professor in the Department of Computer Science and Software Engineering and in the Department of Electrical and Computer Engineering at Université Laval.

She specializes in algorithms that learn through interaction with their environment using reinforcement learning, and is particularly interested in leveraging these approaches in health-related applications.

Current Students

Master's Research - Université Laval
Master's Research - Université Laval
Master's Research - Université de Montréal
PhD - Université Laval
Master's Research - Université Laval
PhD - Université Laval
PhD - Université Laval
PhD - Université Laval
Postdoctorate - Université Laval

Publications

Human-AI Alignment of Learning Trajectories in Video Games: a continual RL benchmark proposal
Yann Harel
Lune P Bellec
We propose a design for a continual reinforcement learning (CRL) benchmark called GHAIA, centered on human-AI alignment of learning trajectories in structured video game environments. Using Super Mario Bros. as a case study, gameplay is decomposed into short, annotated scenes organized into diverse task sequences based on gameplay patterns and difficulty. Evaluation protocols measure both plasticity and stability, with flexible revisit and pacing schedules. A key innovation is the inclusion of high-resolution human gameplay data collected under controlled conditions, enabling direct comparison of human and agent learning. In addition to adapting classical CRL metrics like forgetting and backward transfer, we introduce semantic transfer metrics capturing learning over groups of scenes sharing similar game patterns. We demonstrate the feasibility of our approach on human and agent data, and discuss key aspects of the first release for community input.
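As an illustration of the classical CRL metrics mentioned in this abstract, the sketch below computes forgetting and backward transfer from a matrix of evaluation scores. The matrix convention, function names, and toy numbers are assumptions for illustration only; GHAIA's exact protocol and its semantic transfer metrics are not shown.

```python
import numpy as np

def backward_transfer(R: np.ndarray) -> float:
    """Average change in performance on earlier tasks after training on all tasks.

    R[i, j] = evaluation score on task j after training on the i-th task,
    so R[j, j] is the score right after learning task j and R[-1, j] is the
    final score. Negative values indicate forgetting.
    """
    T = R.shape[0]
    return float(np.mean([R[-1, j] - R[j, j] for j in range(T - 1)]))

def average_forgetting(R: np.ndarray) -> float:
    """Mean drop from the best score ever reached on a task to its final score."""
    T = R.shape[0]
    return float(np.mean([R[:, j].max() - R[-1, j] for j in range(T - 1)]))

# Toy example: 3 tasks, scores in [0, 1]
R = np.array([[0.8, 0.1, 0.0],
              [0.6, 0.9, 0.2],
              [0.5, 0.7, 0.9]])
print(backward_transfer(R))   # -0.25 (performance on earlier tasks dropped)
print(average_forgetting(R))  #  0.25
```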
Optimal discounting for offline input-driven MDP
Offline reinforcement learning has gained a lot of popularity for its potential to solve industry challenges. However, real-world environments are often highly stochastic and partially observable, leading long-term planners to overfit to offline data in model-based settings. Input-driven Markov Decision Processes (IDMDPs) offer a way to work with some of the uncertainty by letting designers separate what the agent has control over (states) from what it cannot (inputs) in the environment. These stochastic external inputs are often difficult to model. Under the assumption that the input model will be imperfect, we investigate the bias-variance tradeoff under shallow planning in IDMDPs. Paving the way to input-driven planning horizons, we also investigate the similarity of optimal planning horizons at different inputs given the structure of the input space.
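The idea of shallow planning can be pictured with plain tabular value iteration, where the discount factor (and the number of sweeps) sets the effective planning horizon used against a possibly imperfect model. The model below is hand-made for illustration and does not reproduce the IDMDP structure or the paper's analysis.

```python
import numpy as np

def shallow_value_iteration(P, R, gamma, n_iter):
    """Tabular value iteration with discount `gamma`, truncated after `n_iter` sweeps.

    P[a, s, s'] : transition probabilities (possibly estimated from an imperfect model),
    R[a, s]     : expected immediate reward.
    A smaller gamma (or fewer sweeps) gives a shorter effective planning horizon,
    trading bias (myopic values) against variance from model error.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    Q = R.copy()
    for _ in range(n_iter):
        Q = R + gamma * P @ V          # Q[a, s]
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)         # values and greedy policy

# Tiny 2-state, 2-action example with an assumed (made-up) model
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 0.8]])
for gamma in (0.5, 0.9, 0.99):         # shallower -> deeper planning
    V, pi = shallow_value_iteration(P, R, gamma, n_iter=200)
    print(gamma, np.round(V, 2), pi)
```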
Platform-based Adaptive Experimental Research in Education: Lessons Learned from The Digital Learning Challenge
Ilya Musabirov
Mohi Reza
Haochen Song
Steven Moore
Pan Chen
Harsh Kumar
Tong Li
John Stamper
Norman Bier
Anna Rafferty
Thomas Price
Nina Deliu
Michael Liut
Joseph Jay Williams
We report on our experience with a real-world, multi-experimental evaluation of an adaptive experimentation platform within the XPRIZE Digital Learning Challenge framework. We showcase how EASI (Experiment as a Service) cross-platform software supports quick integration and deployment of adaptive experiments, as well as five systematic replications within a 30-day timeframe. We outline the key scenarios in which platform-supported experiments are applicable and reflect on lessons learned from this two-year project that can help researchers and practitioners integrate adaptive experiments into real-world courses.
Adaptive Experiments Under Data Sparse Settings: Applications for Educational Platforms
Haochen Song
Ilya Musabirov
Ananya Bhattacharjee
Meredith Franklin
Anna Rafferty
Joseph Jay Williams
Adaptive experimentation is increasingly used in educational platforms to personalize learning through dynamic content and feedback. However, standard adaptive strategies such as Thompson Sampling often underperform in real-world educational settings where content variations are numerous and student participation is limited, resulting in sparse data. In particular, Thompson Sampling can lead to imbalanced content allocation and delayed convergence on which aspects of content are most effective for student learning. To address these challenges, we introduce Weighted Allocation Probability Adjusted Thompson Sampling (WAPTS), an algorithm that refines the sampling strategy to improve content-related decision-making in data-sparse environments. WAPTS is guided by the principle of lenient regret, allowing near-optimal allocations to accelerate learning while still exploring promising content. We evaluate WAPTS in a learnersourcing scenario where students rate peer-generated learning materials, and demonstrate that it enables earlier and more reliable identification of promising treatments.
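For context, the sketch below shows standard Beta-Bernoulli Thompson Sampling, the baseline that WAPTS refines. The allocation-probability adjustment itself is only flagged in a comment, since its exact form is not given in the abstract, and the reward rates and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_step(successes, failures):
    """One allocation step of Beta-Bernoulli Thompson Sampling.

    successes[k], failures[k]: observed counts for content variation k.
    Returns the index of the content version to show next.
    WAPTS would additionally adjust the resulting allocation probabilities
    to avoid extreme imbalance in data-sparse settings (not shown here).
    """
    samples = rng.beta(successes + 1, failures + 1)   # Beta(1, 1) prior
    return int(np.argmax(samples))

# Simulated small educational experiment with 3 content variations
true_rates = [0.35, 0.50, 0.40]           # hypothetical helpfulness rates
s, f = np.zeros(3), np.zeros(3)
for _ in range(200):                       # only 200 student interactions (sparse)
    k = thompson_step(s, f)
    reward = rng.random() < true_rates[k]
    s[k] += reward
    f[k] += 1 - reward
pulls = (s + f).astype(int)
print("pulls per arm:", pulls, "estimated rates:", np.round(s / np.maximum(s + f, 1), 2))
```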
Robust Fine-Tuning from Non-Robust Pretrained Models: Mitigating Suboptimal Transfer With Adversarial Scheduling
Fine-tuning pretrained models is a standard and effective workflow in modern machine learning. However, robust fine-tuning (RFT), which aims to simultaneously achieve adaptation to a downstream task and robustness to adversarial examples, remains challenging. Despite the abundance of non-robust pretrained models in open-source repositories, their potential for RFT is less understood. We address this knowledge gap by systematically examining RFT from such non-robust models. Our experiments reveal that fine-tuning non-robust models with a robust objective, even under small perturbations, can lead to poor performance, a phenomenon that we dub suboptimal transfer. In challenging scenarios (e.g., difficult tasks, high perturbation), the resulting performance can be so low that it may be considered a transfer failure. We find that fine-tuning using a robust objective impedes task adaptation at the beginning of training and eventually prevents optimal transfer. To address this, we propose a novel heuristic, Epsilon-Scheduling, a schedule over the perturbation strength used during training that promotes optimal transfer. Additionally, we introduce expected robustness, a metric that captures performance across a range of perturbations, providing a more comprehensive evaluation of the accuracy-robustness trade-off for diverse models at test time. Extensive experiments on a wide range of configurations (six pretrained models and five datasets) show that Epsilon-Scheduling successfully prevents suboptimal transfer and consistently improves expected robustness.
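To make the idea of a perturbation-strength schedule concrete, here is a minimal robust fine-tuning loop with a linearly ramped epsilon. The schedule shape, the single-step FGSM attack, and the toy model and data are assumptions for illustration, not the paper's exact Epsilon-Scheduling rule.

```python
import torch
import torch.nn as nn

def epsilon_schedule(step, total_steps, eps_max):
    """Linearly ramp the perturbation budget from 0 to eps_max over the first half
    of training (assumed shape; the paper's schedule may differ)."""
    return eps_max * min(1.0, step / max(1, total_steps // 2))

def fgsm(model, x, y, eps, loss_fn):
    """Single-step FGSM adversarial example within an L-inf ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy robust fine-tuning loop on random data (stand-in for a real downstream task)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # "pretrained" stand-in
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
total_steps, eps_max = 100, 0.1
for step in range(total_steps):
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
    eps = epsilon_schedule(step, total_steps, eps_max)  # small eps early eases task adaptation
    x_adv = fgsm(model, x, y, eps, loss_fn)
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()
    opt.step()
```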
Randomized Confidence Bounds for Stochastic Partial Monitoring
The partial monitoring (PM) framework provides a theoretical formulation of sequential learning problems with incomplete feedback. On each round, a learning agent plays an action while the environment simultaneously chooses an outcome. The agent then observes a feedback signal that is only partially informative about the (unobserved) outcome. The agent leverages the received feedback signals to select actions that minimize the (unobserved) cumulative loss. In contextual PM, the outcomes depend on some side information that is observable by the agent before selecting the action on each round. In this paper, we consider the contextual and non-contextual PM settings with stochastic outcomes. We introduce a new class of PM strategies based on the randomization of deterministic confidence bounds. We also extend regret guarantees to settings where existing stochastic strategies are not applicable. Our experiments show that the proposed RandCBP and RandCBPside* strategies perform favorably against state-of-the-art baselines in multiple PM games. To advocate for the adoption of the PM framework, we design a use case on the real-world problem of monitoring the error rate of any deployed classification system.
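The core idea of randomizing a deterministic confidence bound can be shown on a plain stochastic bandit, which is simpler than the partial monitoring setting. The rescaling distribution and the bandit setup below are assumptions and do not reproduce RandCBP itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_ucb_index(means, counts, t, sigma=1.0):
    """Select an arm using a randomized confidence width.

    A deterministic bound would use width = sigma * sqrt(2 log t / n); here the
    width is rescaled by a non-negative random draw, illustrating (on a plain
    bandit, not in partial monitoring) what randomizing a deterministic
    confidence bound looks like.
    """
    widths = sigma * np.sqrt(2 * np.log(max(t, 2)) / np.maximum(counts, 1))
    z = np.abs(rng.standard_normal(len(means)))   # random rescaling, assumed form
    return int(np.argmax(means + z * widths))

# Toy run on 3 Gaussian arms
true_means = [0.2, 0.5, 0.4]
means, counts = np.zeros(3), np.zeros(3)
for t in range(1, 501):
    k = randomized_ucb_index(means, counts, t)
    r = rng.normal(true_means[k], 0.5)
    counts[k] += 1
    means[k] += (r - means[k]) / counts[k]
print("pulls per arm:", counts.astype(int))
```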
Data harmonization for Advancing research on Personalized Rehabilitation Interventions for Patients with Traumatic Brain Injury and Stroke: A proof of concept
Dorra Rakia Allegue
Despoina Petsani
Nathalie Ponthon
Evdokimos Konstantinidis
Panagiotis Bamidis
Eva Kehayia
Sara Ahmed
Stroke and traumatic brain injury (TBI) are leading causes of morbidity and mortality, affecting survivors' mobility and social participation. Although personalized interventions could positively impact survivors' recovery, the effectiveness of such interventions remains unclear. Open-access data repositories provide access to shared data that could help uncover new evidence of effective interventions; however, harmonizing data across studies requires many steps, given the varied methods of data collection, intervention characteristics, and population sociodemographic profiles. This proof-of-concept study aimed to describe the steps and anchors that contributed to the development of guiding frameworks to harmonize data across different studies. Data were extracted from the Federal Interagency Traumatic Brain Injury Research (FITBIR) repository and stored on an online cloud platform. The outcome measures were mapped to mobility determinants using the International Classification of Functioning, Disability, and Health (ICF) and the Webber framework. The intervention's effect was categorized according to the Minimal Clinically Important Differences (MCIDs) of the measures administered. The study proposed a novel framework for intervention features, which aims to enhance our understanding of the mechanisms of action and potential impact of rehabilitation interventions. The framework classified interventions based on their nature, context, specific body systems, dosage, caregiver assistance, and behaviour change strategies. In conclusion, this study demonstrated the feasibility of harmonizing data extracted from different sources in the FITBIR repository. Leveraging existing open databases offers tremendous opportunities to advance research on personalized interventions for patients with TBI and stroke and to inform decision-making during transitions.
On shallow planning under partial observability
Neural Active Learning Meets the Partial Monitoring Framework
Knowledge by omission: the significance of omissions in the 5-choice serial reaction time task
Caroline Vouillac-Mendoza
Serge H. Ahmed
Karine Guillem
The 5-choice serial reaction time task (5-CSRTT) is commonly used to assess attention in rodents. Manipulating this task by decreasing the light stimulus duration is often used to probe attentional capacity and causes a decrease in accuracy and an increase in omissions. However, although a decrease in response accuracy is commonly interpreted as a decrease in attention, it is more difficult to interpret an increase in omissions in terms of attentional performance. Here we present a series of experiments in rats that investigates the origins of these key behavioral measures of attention in the 5-CSRTT. After initial training in the 5-CSRTT, rats were tested in a variable stimulus duration procedure to increase task difficulty and probe visual attentional capacity under several specific controlled conditions. We found that response accuracy reflects visuospatial sustained attentional processing, as commonly interpreted, while response omission reflects rats' ignorance about the stimulus location, presumably due to failure to pay attention to the curved wall during its presentation. Moreover, when rats lack relevant information, they choose not to respond instead of responding randomly. Overall, our results indicate that response accuracy and response omission correspond to two distinct attentional states.
Deep reinforcement learning for continuous wood drying production line control
François-Alexandre Tremblay
Philippe Marier
Jonathan Gaudreault
Development of AI-assisted microscopy frameworks through realistic simulation with pySTED
Anthony Bilodeau
Albert Michaud-Gagnon
Julia Chabbert
Benoit Turcotte
Jörn Heine
The integration of artificial intelligence into microscopy systems significantly enhances performance, optimizing both the image acquisition and analysis phases. Development of artificial intelligence-assisted super-resolution microscopy is often limited by access to large biological datasets, as well as by the difficulty of benchmarking and comparing approaches on heterogeneous samples. We demonstrate the benefits of a realistic stimulated emission depletion (STED) microscopy simulation platform, pySTED, for the development and deployment of artificial intelligence strategies for super-resolution microscopy. pySTED integrates theoretically and empirically validated models for photobleaching and point spread function generation in STED microscopy, simulates realistic point-scanning dynamics, and uses a deep learning model to replicate the underlying structures of real images. This simulation environment can be used for data augmentation to train deep neural networks, for the development of online optimization strategies, and for training reinforcement learning models. Using pySTED as a training environment allows reinforcement learning models to bridge the gap between simulation and reality, as showcased by successful deployment on a real microscope system without fine-tuning.