
Siamak Ravanbakhsh

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, McGill University, School of Computer Science
Research Topics
AI Alignment
Active Learning
Representation Learning
Reinforcement Learning
Deep Learning
Learning on Graphs
Generalization
AI for Science
Bayesian Inference
Generative Models
Probabilistic Models
Reasoning
Symmetry

Biography

Siamak Ravanbakhsh has been an assistant professor at McGill University's School of Computer Science since August 2019. Before joining McGill and Mila – Quebec Artificial Intelligence Institute, he held a similar position at the University of British Columbia. From 2015 to 2017, he was a postdoctoral fellow at the Machine Learning Department and the Robotics Institute of Carnegie Mellon University, and he received his PhD from the University of Alberta. He is interested in problems of representation learning and inference in AI.

His current research focuses on the role of symmetry and invariance in deep representation learning.

Current Students

PhD - McGill
Co-supervisor:
Master's (research) - McGill
Principal supervisor:
Master's (research) - McGill
Master's (professional) - McGill
PhD - McGill
PhD - McGill
Master's (research) - McGill
Research Intern - McGill
PhD - McGill
Research Collaborator - McGill
Postdoctoral Fellow - McGill
Master's (research) - McGill
Alumni Collaborator - McGill

Publications

Equivariant Networks for Pixelized Spheres
Pixelizations of Platonic solids such as the cube and icosahedron have been widely used to represent spherical data, from climate records to Cosmic Microwave Background maps. Platonic solids have well-known global symmetries. Once we pixelize each face of the solid, each face also possesses its own local symmetries in the form of Euclidean isometries. One way to combine these symmetries is through a hierarchy. However, this approach does not adequately model the interplay between the two levels of symmetry transformations. We show how to model this interplay using ideas from group theory, identify the equivariant linear maps, and introduce equivariant padding that respects these symmetries. Deep networks that use these maps as their building blocks generalize gauge equivariant CNNs on pixelized spheres. These deep networks achieve state-of-the-art results on semantic segmentation for climate data and omnidirectional image processing. Code is available at https://git.io/JGiZA.
Deep Generative Models for Galaxy Image Simulations
François Lanusse
Rachel Mandelbaum
Chun-Liang Li
Peter Freeman
Barnabás Póczos
Image simulations are essential tools for preparing and validating the analysis of current and future wide-field optical surveys. However, the galaxy models used as the basis for these simulations are typically limited to simple parametric light profiles, or use a fairly limited amount of available space-based data. In this work, we propose a methodology based on Deep Generative Models to create complex models of galaxy morphologies that may meet the image simulation needs of upcoming surveys. We address the technical challenges associated with learning this morphology model from noisy and PSF-convolved images by building a hybrid Deep Learning/physical Bayesian hierarchical model for observed images, explicitly accounting for the Point Spread Function and noise properties. The generative model is further made conditional on physical galaxy parameters, to allow for sampling new light profiles from specific galaxy populations. We demonstrate our ability to train and sample from such a model on galaxy postage stamps from the HST/ACS COSMOS survey, and validate the quality of the model using a range of second- and higher-order morphology statistics. Using this set of statistics, we demonstrate significantly more realistic morphologies using these deep generative models compared to conventional parametric models. To help make these generative models practical tools for the community, we introduce GalSim-Hub, a community-driven repository of generative models, and a framework for incorporating generative models within the GalSim image simulation software.
Recovering the Wedge Modes Lost to 21-cm Foregrounds
Samuel Gagnon-Hartman
Yue Cui
Adrian Liu
One of the critical challenges facing imaging studies of the 21-cm signal at the Epoch of Reionization (EoR) is the separation of astrophysical foreground contamination. These foregrounds are known to lie in a wedge-shaped region of
Universal Equivariant Multilayer Perceptrons
Group invariant and equivariant Multilayer Perceptrons (MLP), also known as Equivariant Networks, have achieved remarkable success in learning on a variety of data structures, such as sequences, images, sets, and graphs. Using tools from group theory, this paper proves the universality of a broad class of equivariant MLPs with a single hidden layer. In particular, it is shown that having a hidden layer on which the group acts regularly is sufficient for universal equivariance (invariance). A corollary is unconditional universality of equivariant MLPs for Abelian groups, such as CNNs with a single hidden layer. A second corollary is the universality of equivariant MLPs with a high-order hidden layer, where we give both group-agnostic bounds and means for calculating group-specific bounds on the order of hidden layer that guarantees universal equivariance (invariance).
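To illustrate the kind of equivariant linear map such networks are built from, here is a minimal NumPy sketch for the simplest case: the symmetric group permuting a set of features. The function name and parameters are ours, not from the paper; this is an illustration of equivariance, not the paper's construction.

```python
import numpy as np

def perm_equivariant_linear(x, lam, gam):
    """A permutation-equivariant linear map on a set of n scalars:
    f(x) = lam * x + gam * mean(x).  Because mean(x) is invariant to
    permutations, permuting the input permutes the output identically."""
    return lam * x + gam * np.mean(x)

# Equivariance check: permuting before the map equals permuting after it.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)
lhs = perm_equivariant_linear(x[perm], 2.0, -0.5)
rhs = perm_equivariant_linear(x, 2.0, -0.5)[perm]
print(np.allclose(lhs, rhs))  # True
```

Stacking such maps with pointwise nonlinearities gives a permutation-equivariant MLP; the paper characterizes when hidden layers of this kind suffice for universality.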
Equivariant Maps for Hierarchical Structures
Machine Learning Advantages in Canadian Astrophysics
Kim Venn
Sébastien Fabbro
Adrian Liu
Gwendolyn Eadie
Sara Ellison
Joanna Woo
JJ Kavelaars
Kwang Moo Yi
Renée Hložek
Jo Bovy
Hossen Teimoorinia
Locke Spencer
The application of machine learning (ML) methods to the analysis of astrophysical datasets is on the rise, particularly as the computing power and complex algorithms become more powerful and accessible. As the field of ML enjoys a continuous stream of breakthroughs, its applications demonstrate the great potential of ML, ranging from achieving tens of millions of times increase in analysis speed (e.g., modeling of gravitational lenses or analysing spectroscopic surveys) to solutions of previously unsolved problems (e.g., foreground subtraction or efficient telescope operations). The number of astronomical publications that include ML has been steadily increasing since 2010.
With the advent of extremely large datasets from a new generation of surveys in the 2020s, ML methods will become an indispensable tool in astrophysics. Canada is an unambiguous world leader in the development of the field of machine learning, attracting large investments and skilled researchers to its prestigious AI Research Institutions. This provides a unique opportunity for Canada to also be a world leader in the application of machine learning in the field of astrophysics, and foster the training of a new generation of highly skilled researchers.
Equivariant Entity-Relationship Networks
Devon Graham
The relational model is a ubiquitous representation of big-data, in part due to its extensive use in databases. In this paper, we propose the Equivariant Entity-Relationship Network (EERN), which is a Multilayer Perceptron equivariant to the symmetry transformations of the Entity-Relationship model. To this end, we identify the most expressive family of linear maps that are exactly equivariant to entity relationship symmetries, and further show that they subsume recently introduced equivariant maps for sets, exchangeable tensors, and graphs. The proposed feed-forward layer has linear complexity in the data and can be used for both inductive and transductive reasoning about relational databases, including database embedding, and the prediction of missing records. This provides a principled theoretical foundation for the application of deep learning to one of the most abundant forms of data. Empirically, EERN outperforms different variants of coupled matrix tensor factorization in both synthetic and real-data experiments.
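One of the map families the abstract mentions as a special case, equivariant maps for exchangeable matrices, can be sketched in a few lines of NumPy. The layer below uses the classic four-parameter form (elementwise, row-mean, column-mean, and global-mean terms); the function name and weights are illustrative placeholders, not the EERN layer itself.

```python
import numpy as np

def exchangeable_matrix_layer(X, a, b, c, d):
    """A linear map equivariant to independent permutations of the rows and
    columns of X: each term is built from means, which commute with those
    permutations.  Biases and nonlinearities are omitted for brevity."""
    return (a * X
            + b * X.mean(axis=0, keepdims=True)   # column means, shared across rows
            + c * X.mean(axis=1, keepdims=True)   # row means, shared across columns
            + d * X.mean())                       # global mean, shared everywhere

# Equivariance check under a row permutation p and a column permutation q.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4))
p, q = rng.permutation(3), rng.permutation(4)
Y1 = exchangeable_matrix_layer(X[np.ix_(p, q)], 1.0, 0.3, -0.2, 0.5)
Y2 = exchangeable_matrix_layer(X, 1.0, 0.3, -0.2, 0.5)[np.ix_(p, q)]
print(np.allclose(Y1, Y2))  # True
```

The EERN paper generalizes this idea from a single exchangeable matrix to the full symmetry group of an entity-relationship schema, while keeping the layer's cost linear in the data.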