
Yashar Hezaveh

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Physics
Research Topics
Computer Vision
Deep Learning
Representation Learning

Biography

Yashar Hezaveh is an associate academic member of Mila – Quebec Artificial Intelligence Institute and director of the Montréal Institute for Astrophysical Data Analysis and Machine Learning (Ciela). He is an assistant professor in the Department of Physics at Université de Montréal and the Canada Research Chair in Astrophysical Data Analysis and Machine Learning. In addition, Hezaveh is an associate member of McGill University’s Trottier Space Institute and a visiting fellow at the Center for Computational Astrophysics at the Flatiron Institute in New York and at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. He was previously a research fellow at the Flatiron Institute (2018–2019) and a NASA Hubble Fellow at Stanford University (2013–2018).

Hezaveh is a world leader in the analysis of astrophysical data using deep learning. His current research focuses primarily on Bayesian inference with AI, with the goal of learning about the distribution of dark matter in strongly lensed galaxies using data from large cosmological surveys. His research is supported by the Schmidt Futures Foundation and the Simons Foundation.

Current Students

PhD - Université de Montréal
PhD - Université de Montréal
Research Intern - Université de Montréal
Master's Research - McGill University
PhD - Université de Montréal
PhD - Université de Montréal
Postdoctorate - Université de Montréal
Master's Research - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
Postdoctorate - Université de Montréal
Master's Research - McGill University
Postdoctorate - McGill University
Postdoctorate - Université de Montréal
Postdoctorate - Université de Montréal

Publications

Transformer Embeddings for Fast Microlensing Inference
Neural Deprojection of Galaxy Stellar Mass Profiles
M. J. Yantovski-Barth
Hengyue Zhang
Martin Bureau
We introduce a neural approach to dynamical modeling of galaxies that replaces traditional imaging-based deprojections with a differentiable mapping. Specifically, we train a neural network to translate Nuker profile parameters into analytically deprojectable Multi Gaussian Expansion components, enabling physically realistic stellar mass models without requiring optical observations. We integrate this model into SuperMAGE, a differentiable dynamical modelling pipeline for Bayesian inference of supermassive black hole masses. Applied to ALMA data, our approach finds results consistent with state-of-the-art models while extending applicability to dust-obscured and active galaxies where optical data analysis is challenging.
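A minimal sketch of the kind of parameter-to-component mapping described above, using an untrained toy network (the input names, layer widths, and number of MGE components here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def init_params(rng, sizes):
    """Random weights for a small fully connected network."""
    return [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Map Nuker profile parameters to MGE (amplitude, sigma) pairs.
    A softplus on the output keeps both quantities positive."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return np.log1p(np.exp(h @ W + b))

rng = np.random.default_rng(0)
n_nuker = 5   # number of Nuker profile parameters (illustrative)
n_gauss = 8   # number of Gaussian components (assumed)
params = init_params(rng, [n_nuker, 32, 32, 2 * n_gauss])

nuker = rng.uniform(0.1, 1.0, n_nuker)
mge = mlp_forward(params, nuker).reshape(n_gauss, 2)  # rows: (amplitude, sigma)
```

In the published pipeline the network is trained so that the resulting MGE matches the Nuker surface-brightness profile; this toy version only fixes the input/output shapes and the positivity constraint.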
Mind the Information Gap: Unveiling Detailed Morphologies of z ∼ 0.5-1.0 Galaxies with SLACS Strong Lenses and Data-Driven Analysis
Pixellated Posterior Sampling of Point Spread Functions in Astronomical Images
We introduce a novel framework for upsampled Point Spread Function (PSF) modeling using pixel-level Bayesian inference. Accurate PSF characterization is critical for precision measurements in many fields, including weak lensing, astrometry, and photometry. Our method defines the posterior distribution of the pixelized PSF model through the combination of an analytic Gaussian likelihood and a highly expressive generative diffusion model prior, trained on a library of HST ePSF templates. Compared to traditional methods (parametric Moffat, ePSF template-based, and regularized likelihood), we demonstrate that our PSF models achieve orders of magnitude higher likelihood and residuals consistent with noise, all while remaining visually realistic. Further, the method applies even for faint and heavily masked point sources, merely producing a broader posterior. By recovering a realistic, pixel-level posterior distribution, our technique enables the first meaningful propagation of detailed PSF morphological uncertainty in downstream analysis. An implementation of our posterior sampling procedure is available on GitHub.
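The central construction here, a posterior whose score is the sum of an analytic Gaussian likelihood score and a learned prior score, can be sketched in a toy one-pixel setting. In this sketch the diffusion prior is replaced by an analytic standard-normal score and sampling is done with unadjusted Langevin dynamics; all numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: one "pixel" x with prior N(0, 1) and observation
# y = x + noise, where noise ~ N(0, sigma^2).
sigma = 0.5
y = 0.8

def score_prior(x):
    # Stand-in for a learned diffusion-model prior score: d/dx log N(x; 0, 1)
    return -x

def score_likelihood(x):
    # Analytic Gaussian likelihood score: d/dx log N(y; x, sigma^2)
    return (y - x) / sigma**2

# Unadjusted Langevin dynamics on the combined (posterior) score
eps = 1e-3
x = np.zeros(5000)  # many independent chains
for _ in range(2000):
    score = score_prior(x) + score_likelihood(x)
    x = x + eps * score + np.sqrt(2 * eps) * rng.normal(size=x.shape)

# Conjugate-Gaussian reference values for this toy posterior
post_mean = y / (1 + sigma**2)
post_var = sigma**2 / (1 + sigma**2)
```

The samples concentrate around the analytic posterior; in the full method the same combination of scores is applied per pixel of an upsampled PSF model, with the prior score supplied by the trained diffusion model.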
Blind Strong Gravitational Lensing Inversion: Joint Inference of Source and Lens Mass with Score-Based Models
Bridging Simulators with Conditional Optimal Transport
Predicting the Subhalo Mass Functions in Simulations from Galaxy Images
Tri Nguyen
J. Rose
Chris Lovell
Francisco Villaescusa-Navarro
The Interpolation Constraint in the RV Analysis of M-Dwarfs Using Empirical Templates
Nicolas B. Cowan
E. Artigau
René Doyon
André M. Silva
Khaled Al Moulla
Precise radial velocity (pRV) measurements of M-dwarfs in the near-infrared (NIR) rely on empirical templates due to the lack of accurate stellar spectral models in this regime. Templates are assumed to approximate the true spectrum when constructed from many observations or in the high signal-to-noise limit. We develop a numerical simulation that generates SPIRou-like pRV observations from PHOENIX spectra, constructs empirical templates, and estimates radial velocities. This simulation solely considers photon noise and evaluates when empirical templates remain reliable for pRV analysis. Our results reveal a previously unrecognized noise source in templates, establishing a fundamental floor for template-based pRV measurements. We find that templates inherently include distortions in stellar line shapes due to imperfect interpolation at the detector's sampling resolution. The magnitude of this interpolation error depends on sampling resolution and RV content. Consequently, while stars with a higher RV content, such as cooler M-dwarfs, are expected to yield lower RV uncertainties, their dense spectral features can amplify interpolation errors, potentially biasing RV estimates. For a typical M4V star, SPIRou's spectral and sampling resolution imposes an RV uncertainty floor of 0.5-0.8 m/s, independent of the star's magnitude or the telescope's aperture. These findings reveal a limitation of template-based pRV methods, underscoring the need for improved spectral modeling and better-than-Nyquist detector sampling to reach the next level of RV precision.
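The interpolation distortion described above can be demonstrated with a toy under-sampled absorption line. Here a single Gaussian line and linear interpolation stand in for a real spectrum and the template-building procedure; all widths and grid sizes are illustrative:

```python
import numpy as np

def line(v):
    """Toy Gaussian absorption line in normalized flux (illustrative width)."""
    return 1.0 - 0.5 * np.exp(-0.5 * (v / 1.5) ** 2)

fine = np.linspace(-10, 10, 2001)   # "true" high-resolution velocity axis
coarse = np.linspace(-10, 10, 41)   # under-sampled detector grid

# Build a "template" by interpolating the coarsely sampled line back onto
# the fine grid, then compare it with the true line shape.
template = np.interp(fine, coarse, line(coarse))
distortion = np.max(np.abs(template - line(fine)))
```

Even with noiseless samples, the reconstructed template deviates from the true line shape at the sub-percent level in this toy case; across a dense spectrum, such shape errors accumulate into the radial-velocity biases the paper quantifies.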
caskade: building Pythonic scientific simulators
Robustness of Neural Ratio and Posterior Estimators to Distributional Shifts for Population-Level Dark Matter Analysis in Strong Gravitational Lensing
We investigate the robustness of Neural Ratio Estimators (NREs) and Neural Posterior Estimators (NPEs) to distributional shifts in the context of measuring the abundance of dark matter subhalos using strong gravitational lensing data. While these data-driven inference frameworks can be accurate on test data from the same distribution as the training sets, in real applications, it is expected that simulated training data and true observational data will differ in their distributions. We explore the behavior of a trained NRE and trained sequential NPEs to estimate the population-level parameters of dark matter subhalos from a large sample of images of strongly lensed galaxies with test data presenting distributional shifts within and beyond the bounds of the training distribution in the nuisance parameters (e.g., the background source morphology). While our results show that NREs and NPEs perform well when tested perfectly in distribution, they exhibit significant biases when confronted with slight deviations from the examples seen in the training distribution. This indicates the necessity for caution when applying NREs and NPEs to real astrophysical data, where high-dimensional underlying distributions are not perfectly known.
Gravitational-Wave Parameter Estimation in non-Gaussian noise using Score-Based Likelihood Characterization
Maximiliano Isi
Kaze W. K. Wong
Gravitational-wave (GW) parameter estimation typically assumes that instrumental noise is Gaussian and stationary. Obvious departures from this idealization are typically handled on a case-by-case basis, e.g., through bespoke procedures to "clean" non-Gaussian noise transients (glitches), as was famously the case for the GW170817 neutron-star binary. Although effective, manipulating the data in this way can introduce biases in the inference of key astrophysical properties, like binary precession, and compound in unpredictable ways when combining multiple observations; alternative procedures free of the same biases, like joint inference of noise and signal properties, have so far proved too computationally expensive to execute at scale. Here we take a different approach: rather than explicitly modeling individual non-Gaussianities to then apply the traditional GW likelihood, we seek to learn the true distribution of instrumental noise without presuming Gaussianity and stationarity in the first place. Assuming only noise additivity, we employ score-based diffusion models to learn an empirical noise distribution directly from detector data and then combine it with a deterministic waveform model to provide an unbiased estimate of the likelihood function. We validate the method by performing inference on a subset of GW parameters from 400 mock observations, containing real LIGO noise from either the Livingston or Hanford detectors. We show that the proposed method can recover the true parameters even in the presence of loud glitches, and that the inference is unbiased over a population of signals without applying any cleaning to the data. This work provides a promising avenue for extracting unbiased source properties in future GW observations over the coming decade.
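The additive-noise likelihood at the heart of this approach, log p(d | θ) = log p_noise(d − h(θ)), can be sketched with a toy waveform and a Gaussian stand-in for the learned noise distribution. In the paper the noise density comes from a score-based diffusion model trained on LIGO data; everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_log_prob(r):
    """Log-density of the noise residual; a standard normal stands in for
    the learned (score-based) noise model."""
    return -0.5 * np.sum(r**2) - 0.5 * r.size * np.log(2 * np.pi)

def waveform(theta, t):
    """Deterministic toy 'waveform': a damped sinusoid with amplitude theta."""
    return theta * np.exp(-t) * np.sin(10 * t)

# Simulated observation: signal plus additive noise
t = np.linspace(0, 1, 256)
theta_true = 1.5
d = waveform(theta_true, t) + rng.normal(0, 1, t.size)

def log_likelihood(theta):
    # Noise additivity: p(d | theta) = p_noise(d - h(theta))
    return noise_log_prob(d - waveform(theta, t))

grid = np.linspace(0, 3, 301)
theta_hat = grid[int(np.argmax([log_likelihood(th) for th in grid]))]
```

Replacing `noise_log_prob` with a non-Gaussian learned density is the only change needed to handle glitches, which is what makes the construction attractive: the waveform model and the inference machinery stay the same.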
The CASTOR mission
Patrick Côté
T. Woods
John Hutchings
J. Rhodes
R. Sánchez-Janssen
Alan D. Scott
J. Pazder
Melissa Amenouche
Michael Balogh
Simon Blouin
Alain Cournoyer
M. Drout
Nick Kuzmin
Katherine J. Mack
Laura Ferrarese
Wesley C. Fraser
S. Gallagher
Frederic J. Grandmont
Daryl Haggard
P. Harrison
V. Hénault-Brunet
J. Kavelaars
V. Khatu
J. Roediger
J. Rowe
Marcin Sawicki
Jesper Skottfelt
Matt Taylor
L. van Waerbeke
Laurie Amen
Dhananjhay Bansal
Martin Bergeron
Toby Brown
Greg Burley
Hum Chand
Isaac Cheng
Ryan Cloutier
N. Dickson
Oleg Djazovski
Ivana Damjanov
James Doherty
K. Finner
Macarena García Del Valle Espinosa
Jennifer Glover
A. I. Gómez de Castro
Or Graur
Tim Hardy
Michelle Kao
D A Leahy
Deborah Lokhorst
A. I. Malz
Allison Man
Madeline A. Marshall
Sean McGee
Ryan McKenzie
Kai Michaud
Surhud S. More
David Morris
Patrick W. Morris
T. Moutard
Wasi Naqvi
Matthew Nicholl
G. Noirot
M. S. Oey
C. Opitom
Samir Salim
Bryan R. Scott
Charles Shapiro
Daniel Stern
Ashwin Subramaniam
David Thilke
I. Wevers
Dmitri Vorobiev
L. Y. Aaron Yung
Frédéric Zamkotsian
S. Aigrain
A. Alavi
Martin Barstow
Peter Bartosik
H. Bluhm
J. Bovy
Peter Cameron
R. Carlberg
J. Christiansen
Yuyang Chen
P. Crowther
Kristen Dage
Aaron Dotter
Patrick Dufour
Jean Dupuis
B. Dryer
A. Duara
Gwendolyn M. Eadie
Marielle R. Eduardo
V. Estrada-Carpenter
Sébastien Fabbro
A. Faisst
N. M. Ford
M. Fraser
Boris T. Gaensicke
Shashikiran Ganesh
Poshak Gandhi
Melissa L. Graham
R. Hamel
Martin Hellmich
John J. Hennessy
Kaitlyn Hessel
J. Heyl
Catherine Heymans
Renée Hložek
Michael Hoenk
Andrew Holland
Eric Huff
Ian Hutchinson
I. Iwata
April D. Jewell
Doug Johnstone
Maia Jones
Todd J. Jones
D. Lang
J. Lapington
Justin Larivière
C. Lawlor-Forsyth
Denis Laurin
Charles Lee
Ting S. Li
S. Lim
B. Ludwig
Matt Kozun
V. M
Robert Mann
Alan McConnachie
Evan McDonough
S. Metchev
David R. Miller
Takashi Moriya
Cameron Morgan
Julio F. Navarro
Y. Nazé
Shouleh Nikzad
Vivek Oad
N. N.-Q. Ouellette
E. Pass
Will J. Percival
Joe Postma
Nayyer Raza
G. T. Richards
Harvey Richer
Carmelle Robert
Erik Rosolowsky
J. Ruan
Sarah Rugheimer
S. Safi-Harb
Kanak Saha
Vicky Scowcroft
F. Sestito
Himanshu Sharma
James Sikora
G. Sivakoff
T. S. Sivarani
Patrick Smith
Warren Soh
R. Sorba
S. Subramanian
Hossen Teimoorinia
H. Teplitz
Shaylin Thadani
Shavon Thadani
Aaron Tohuvavohu
K. Venn
Nicholas Vieira
Jeremy J. Webb
P. Wiegert
Ryan Wierckx
Yanqin Wu
J. Yeung
S. K. Yi