
Laurence Perreault-Levasseur

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Physics
Research Topics
Computer Vision
Deep Learning
Dynamical Systems
Generative Models
Graph Neural Networks
Probabilistic Models

Biography

Laurence Perreault-Levasseur is the Canada Research Chair in Computational Cosmology and Artificial Intelligence. She is an assistant professor at Université de Montréal and an associate academic member of Mila – Quebec Artificial Intelligence Institute. Perreault-Levasseur’s research focuses on the development and application of machine learning methods to cosmology.

She is also a Visiting Scholar at the Flatiron Institute in New York City, where she was previously a research fellow at the Center for Computational Astrophysics. Before that, she was a KIPAC postdoctoral fellow at Stanford University.

During her PhD at the University of Cambridge, she worked on applying open effective field theory methods to the formalism of inflation. She completed her BSc and MSc degrees at McGill University.

Current Students

PhD - Université de Montréal
PhD - McGill University (principal supervisor)
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal (principal supervisor)
Research Intern - Université de Montréal (co-supervisor)
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal (principal supervisor)
Postdoctorate - Université de Montréal (co-supervisor)
PhD - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal (principal supervisor)
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal
Master's Research - Université de Montréal
Postdoctorate - Université de Montréal (principal supervisor)
Postdoctorate - McGill University (co-supervisor)
Postdoctorate - Université de Montréal (co-supervisor)

Publications

Opportunities in AI/ML for the Rubin LSST Dark Energy Science Collaboration
LSST Dark Energy Science Collaboration
Eric Aubourg
Camille Avestruz
M. R. Becker
Biswajit Biswas
Rahul Biswas
Boris Bolliet
César Briceño
Clecio Bom
Raphaël Bonnet-Guerrini
Alexandre Boucaud
J.E. Campagne
Chihway Chang
Aleksandra Ćiprijanović
Johann Cohen-Tanugi
Michael W. Coughlin
John Franklin Crenshaw
Juan C. Cuevas‐Tello
Juan de Vicente
Seth William Digel
Steven Dillmann
Mariano Javier de León Dominguez Romero
Alex Drlica-Wagner
Sydney Erickson
Alexander Gagliano
Christos Georgiou
Aritra Ghosh
Matthew Grayling
Kirill A. Grishin
Alan Heavens
Lindsay R. House
Mustapha Ishak
Wassim Kabalan
Olivia Lynn
François Lanusse
C. Danielle Leonard
P.-F. Léget
Michelle Lochner
Joel Meyers
Peter Melchior
Grant Merz
Martin Millon
Anais Möller
G. Narayan
Yuuki Omori
Hiranya Peiris
A. A. Plazas
Nesar Ramachandra
B. Remy
C. Roucelle
Jaime Ruiz-Zapatero
Stefan Schuldt
I. Sevilla-Noarbe
Ved G. Shah
Tjitske Starkenburg
Stephen Thorp
Tianqing Zhang
Tilman Tröster
Roberto Trotta
Padma T. Venkatraman
A. R. Wasserman
Tim White
Yuanyuan Zhang
Adam S. Bolton
Arun Kannawadi
Yao-Yuan Mao
Laura Toribio San Cipriano
The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will produce unprecedented volumes of heterogeneous astronomical data (images, catalogs, and alerts) that challenge traditional analysis pipelines. The LSST Dark Energy Science Collaboration (DESC) aims to derive robust constraints on dark energy and dark matter from these data, requiring methods that are statistically powerful, scalable, and operationally reliable. Artificial intelligence and machine learning (AI/ML) are already embedded across DESC science workflows, from photometric redshifts and transient classification to weak lensing inference and cosmological simulations. Yet their utility for precision cosmology hinges on trustworthy uncertainty quantification, robustness to covariate shift and model misspecification, and reproducible integration within scientific pipelines. This white paper surveys the current landscape of AI/ML across DESC's primary cosmological probes and cross-cutting analyses, revealing that the same core methodologies and fundamental challenges recur across disparate science cases. Since progress on these cross-cutting challenges would benefit multiple probes simultaneously, we identify key methodological research priorities, including Bayesian inference at scale, physics-informed methods, validation frameworks, and active learning for discovery. With an eye on emerging techniques, we also explore the potential of the latest foundation model methodologies and LLM-driven agentic AI systems to reshape DESC workflows, provided their deployment is coupled with rigorous evaluation and governance. Finally, we discuss critical software, computing, data infrastructure, and human capital requirements for the successful deployment of these new methodologies, and consider associated risks and opportunities for broader coordination with external actors.
The 2nd Workshop on Foundation Models for Science: Real-World Impact and Science-First Design
Wuyang Chen
Yongji Wang
N. Benjamin Erichson
Bo Li
Damian Borth
Swarat Chaudhuri
Scientific foundation models should be built for science, not for generic AI tastes or leaderboard prestige. This workshop centers problem-driven design: models that measurably advance real scientific inquiries, e.g., forecasting extreme climate events, accelerating materials discovery, understanding biological mechanisms, co-developed with domain experts and validated against field data, experiments, and downstream impact. We argue that foundation models for science must be built differently from language and vision. Scientific data are physical, causal, spatiotemporal, and often scarce or biased; objectives must reflect mechanistic fidelity, not just predictive accuracy. This calls for scientific priors and constraints, robust uncertainty quantification (UQ), and architectures that natively handle multi-modality (e.g., grids, meshes, spectra, time series, point clouds, text, images, code). It also demands tight integration with classical scientific tools (simulators, PDE solvers, optimization and inference engines, and HPC workflows) to yield hybrid systems that are faster, more accurate, and more trustworthy. We will highlight opportunities and hard problems unique to science: enforcing conservation laws and symmetries; learning across vast spatial and temporal scales; representing extreme events and tipping points; calibrating and validating UQ; and developing evaluation protocols that reward mechanistic insight and actionable reliability. The goal is a roadmap for building, training, and deploying scientific foundation models that accelerate discovery while respecting the structure of the natural world.
Transformer Embeddings for Fast Microlensing Inference
Neural Deprojection of Galaxy Stellar Mass Profiles
M. J. Yantovski-Barth
Hengyue Zhang
Martin Bureau
We introduce a neural approach to dynamical modeling of galaxies that replaces traditional imaging-based deprojections with a differentiable mapping. Specifically, we train a neural network to translate Nuker profile parameters into analytically deprojectable Multi Gaussian Expansion components, enabling physically realistic stellar mass models without requiring optical observations. We integrate this model into SuperMAGE, a differentiable dynamical modelling pipeline for Bayesian inference of supermassive black hole masses. Applied to ALMA data, our approach finds results consistent with state-of-the-art models while extending applicability to dust-obscured and active galaxies where optical data analysis is challenging.
Mind the Information Gap: Unveiling Detailed Morphologies of z ~ 0.5-1.0 Galaxies with SLACS Strong Lenses and Data-Driven Analysis
Pixellated Posterior Sampling of Point Spread Functions in Astronomical Images
We introduce a novel framework for upsampled Point Spread Function (PSF) modeling using pixel-level Bayesian inference. Accurate PSF characterization is critical for precision measurements in many fields including: weak lensing, astrometry, and photometry. Our method defines the posterior distribution of the pixelized PSF model through the combination of an analytic Gaussian likelihood and a highly expressive generative diffusion model prior, trained on a library of HST ePSF templates. Compared to traditional methods (parametric Moffat, ePSF template-based, and regularized likelihood), we demonstrate that our PSF models achieve orders of magnitude higher likelihood and residuals consistent with noise, all while remaining visually realistic. Further, the method applies even for faint and heavily masked point sources, merely producing a broader posterior. By recovering a realistic, pixel-level posterior distribution, our technique enables the first meaningful propagation of detailed PSF morphological uncertainty in downstream analysis. An implementation of our posterior sampling procedure is available on GitHub.
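The recipe in this abstract, a pixel-level posterior built from an analytic Gaussian likelihood and a learned diffusion prior, can be illustrated with a minimal Langevin-sampling sketch. This is not the paper's implementation: the toy stamp, step sizes, and a standard-normal prior score (standing in for the trained score network) are all illustrative assumptions.

```python
import numpy as np

# Sketch: sample a pixelized PSF posterior whose score is the sum of a
# Gaussian likelihood score and a prior score. A standard-normal prior
# stands in for the learned diffusion-model score network.
rng = np.random.default_rng(0)

n_pix = 8 * 8  # flattened 8x8 PSF stamp
xs, ys = np.arange(n_pix) % 8, np.arange(n_pix) // 8
true_psf = np.exp(-0.5 * ((xs - 3.5) ** 2 + (ys - 3.5) ** 2) / 2.0)
sigma_noise = 0.05
data = true_psf + sigma_noise * rng.normal(size=n_pix)  # observed stamp

def likelihood_score(x):
    # Gradient of log N(data | x, sigma_noise^2 I) with respect to x
    return (data - x) / sigma_noise**2

def prior_score(x):
    # Toy stand-in for the diffusion prior: gradient of log N(0, I)
    return -x

def langevin_sample(x, step=1e-4, n_steps=2000):
    # Unadjusted Langevin dynamics on the posterior score
    for _ in range(n_steps):
        drift = likelihood_score(x) + prior_score(x)
        x = x + step * drift + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

sample = langevin_sample(np.zeros(n_pix))
```

Because the likelihood here is much more informative than the toy prior, posterior samples hug the noisy stamp; with a real diffusion prior and a masked or faint source, the prior term dominates and the posterior broadens instead, which is the behavior the abstract describes.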
Blind Strong Gravitational Lensing Inversion: Joint Inference of Source and Lens Mass with Score-Based Models
Bridging Simulators with Conditional Optimal Transport
The spatially-resolved effect of mergers on the stellar mass assembly of MaNGA galaxies
Eirini Angeloudi
Marc Huertas-Company
Jesús Falcón-Barroso
Alina Boecker
Understanding the origin of stars within a galaxy - whether formed in-situ or accreted from other galaxies (ex-situ) - is key to constraining its evolution. Spatially resolving these components provides crucial insights into a galaxy's mass assembly history. We aim to predict the spatial distribution of ex-situ stellar mass fraction in MaNGA galaxies, and to identify distinct assembly histories based on the radial gradients of these predictions in the central regions. We employ a diffusion model trained on mock MaNGA analogs (MaNGIA), derived from the TNG50 cosmological simulation. The model learns to predict the posterior distribution of resolved ex-situ stellar mass fraction maps, conditioned on stellar mass density, velocity, and velocity dispersion gradient maps. After validating the model on an unseen test set from MaNGIA, we apply it to MaNGA galaxies to infer the spatially-resolved distribution of their ex-situ stellar mass fractions - i.e. the fraction of stellar mass in each spaxel originating from mergers. We identify four broad categories of ex-situ mass distributions: flat gradient, in-situ dominated; flat gradient, ex-situ dominated; positive gradient; and negative gradient. The vast majority of MaNGA galaxies fall in the first category - flat gradients with low ex-situ fractions - confirming that in-situ star formation is the main assembly driver for low- to intermediate-mass galaxies. At high stellar masses, the ex-situ maps are more diverse, highlighting the key role of mergers in building the most massive systems. Ex-situ mass distributions correlate with morphology, star-formation activity, stellar kinematics, and environment, indicating that accretion history is a primary factor shaping massive galaxies. Finally, by tracing their assembly histories in TNG50, we link each class to distinct merger scenarios, ranging from secular evolution to merger-dominated growth.
Predicting the Subhalo Mass Functions in Simulations from Galaxy Images
Tri Nguyen
J. Rose
Chris Lovell
Francisco Villaescusa-Navarro
caskade: building Pythonic scientific simulators
Massive Extremely High-velocity Outflow in the Quasar J164653.72+243942.2
Paola Rodríguez Hidalgo
Hyunseop Choi (최현섭)
Patrick B. Hall
Karen M. Leighly
Liliana Flores
Mikel M. Charles
Cora DeFrancesco
We present the analysis of one of the most extreme quasar outflows found to date in our survey of extremely high velocity outflows (EHVO). J164653.72+243942.2 (z ~ 3.04) shows variable C IV λλ1548,1551 absorption at speeds greater than 0.1c, accompanied by Si IV, N V and Lyα, and disappearing absorption at lower speeds. We perform absorption measurements using the Apparent Optical Depth method and SimBAL. We find the absorption to be very broad (Δv ~ 35,100 km/s in the first epoch and ~13,000 km/s in the second one) and fast (vmax ~ -50,200 km/s and -49,000 km/s, respectively). We measure large column densities (