
Guillaume Rabusseau

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Deep Learning
Graph Neural Networks
Learning on Graphs
Machine Learning Theory
Probabilistic Models
Quantum Information Theory
Recommender Systems
Recurrent Neural Networks
Tensor Factorization

Biography

I have been an assistant professor at Mila – Quebec Artificial Intelligence Institute and in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal (UdeM) since September 2018. I was awarded a Canada CIFAR AI Chair in March 2019. Before joining UdeM, I was a postdoctoral research fellow in the Reasoning and Learning Lab at McGill University, where I worked with Prakash Panangaden, Joelle Pineau and Doina Precup.

I obtained my PhD in 2016 from Aix-Marseille University (AMU) in France, where I worked in the Qarma team (Machine Learning and Multimedia) under the supervision of François Denis and Hachem Kadri. I also obtained my MSc in fundamental computer science and my BSc in computer science from AMU. I am interested in tensor methods for machine learning and in designing learning algorithms for structured data by leveraging linear and multilinear algebra (e.g., spectral methods).
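
As a toy illustration of the spectral methods mentioned above, the sketch below learns a small weighted automaton from a Hankel matrix via truncated SVD. The target automaton, basis, and rank are invented for the example; this is a generic sketch of the technique under those assumptions, not code from any particular paper.

```python
# Minimal sketch of spectral learning of a weighted finite automaton (WFA)
# from a Hankel matrix. The ground-truth WFA, basis, and rank are toy choices.
import numpy as np
from itertools import product

alphabet = ["a", "b"]

# Ground-truth 2-state WFA, used only to generate values of the target function f.
alpha0 = np.array([1.0, 0.0])                    # initial weights
alpha_inf = np.array([0.2, 1.0])                 # final weights
A = {"a": np.array([[0.5, 0.3], [0.0, 0.4]]),    # transition weight matrices
     "b": np.array([[0.1, 0.2], [0.3, 0.1]])}

def f(word):
    """Value computed by the ground-truth WFA on a string."""
    v = alpha0
    for sym in word:
        v = v @ A[sym]
    return v @ alpha_inf

# Basis: all strings of length <= 2 (including the empty string) as prefixes and suffixes.
basis = [""] + ["".join(w) for k in (1, 2) for w in product(alphabet, repeat=k)]

# Hankel matrices: H[p, s] = f(ps) and H_sig[sym][p, s] = f(p sym s).
H = np.array([[f(p + s) for s in basis] for p in basis])
H_sig = {sym: np.array([[f(p + sym + s) for s in basis] for p in basis])
         for sym in alphabet}

# Rank-n truncated SVD of H gives the low-dimensional state space.
n = 2
U, d, Vt = np.linalg.svd(H)
U, D, V = U[:, :n], np.diag(d[:n]), Vt[:n, :].T

# Standard spectral-learning formulas (up to a change of basis of the states).
a0_hat = H[basis.index(""), :] @ V                          # learned initial vector
ainf_hat = np.linalg.inv(D) @ U.T @ H[:, basis.index("")]   # learned final vector
A_hat = {sym: np.linalg.inv(D) @ U.T @ H_sig[sym] @ V for sym in alphabet}

def f_hat(word):
    v = a0_hat
    for sym in word:
        v = v @ A_hat[sym]
    return v @ ainf_hat

print(f("abab"), f_hat("abab"))  # the two values should (nearly) coincide
```
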

Current Students

Master's Research - Université de Montréal
Postdoctorate - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Collaborating researcher - University of Mannheim
PhD - Université de Montréal
PhD - McGill University
PhD - Université de Montréal
Master's Research - McGill University
Collaborating researcher
PhD - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal

Publications

Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning
Generative Learning of Continuous Data by Tensor Networks
Alex Meiburg
Jing Chen
Jacob Miller
Raphaelle Tihon
Alejandro Perdomo-Ortiz
Temporal Graph Benchmark for Machine Learning on Temporal Graphs
Shenyang Huang
Farimah Poursafaei
Jacob Danovitch
Matthias Fey
Weihua Hu
Emanuele Rossi
Jure Leskovec
Michael M. Bronstein
We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are large in scale, span years in duration, incorporate both node- and edge-level prediction tasks, and cover a diverse set of domains including social, trade, transaction, and transportation networks. For both tasks, we design evaluation protocols based on realistic use cases. We extensively benchmark each dataset and find that the performance of common models can vary drastically across datasets. In addition, on dynamic node property prediction tasks, we show that simple methods often achieve superior performance compared to existing temporal graph models. We believe that these findings open up opportunities for future research on temporal graphs. Finally, TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research, including data loading, experiment setup, and performance evaluation. TGB will be maintained and updated on a regular basis and welcomes community feedback. TGB datasets, data loaders, example code, evaluation setup, and leaderboards are publicly available at https://tgb.complexdatalab.com/.
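
As a rough illustration of the ranking-based protocols used for temporal link prediction, the sketch below computes mean reciprocal rank by ranking each positive edge against sampled negative destinations. It does not use TGB's actual API; the scoring function and the edge list are placeholders.

```python
# Illustrative sketch (not TGB's API) of ranking-based evaluation for temporal
# link prediction: each positive edge is ranked against sampled negatives and
# the overall score is the mean reciprocal rank of the true destinations.
import numpy as np

rng = np.random.default_rng(0)

def score(src, dst, t):
    # Placeholder scorer: replace with a real temporal graph model's edge score.
    return np.cos(0.1 * src + 0.2 * dst + 0.01 * t)

def mean_reciprocal_rank(positive_edges, num_nodes, num_negatives=20):
    reciprocal_ranks = []
    for src, dst, t in positive_edges:
        negatives = rng.integers(0, num_nodes, size=num_negatives)
        pos = score(src, dst, t)
        neg = np.array([score(src, n, t) for n in negatives])
        rank = 1 + np.sum(neg >= pos)          # rank of the true destination
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

edges = [(0, 5, 1.0), (2, 7, 2.0), (3, 1, 3.5)]   # (source, destination, timestamp)
print(mean_reciprocal_rank(edges, num_nodes=10))
```
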
ROSA: Random Orthogonal Subspace Adaptation
Marawan Gamal
Aristides Milios
Fast and Attributed Change Detection on Dynamic Graphs with Density of States
Shenyang Huang
Jacob Danovitch
Recurrent Real-valued Neural Autoregressive Density Estimator for Online Density Estimation and Classification of Streaming Data
Tianyu Li
Bogdan Mazoure
In contrast with traditional offline learning, where complete data accessibility is assumed, many modern applications involve processing data in a streaming fashion. This online learning setting raises various challenges, including concept drift and hardware memory constraints. In this paper, we propose the Recurrent Real-valued Neural Autoregressive Density Estimator (RRNADE), a flexible density-based model for online classification and density estimation. RRNADE combines a neural Gaussian mixture density module with a recurrent module. This combination allows RRNADE to exploit possible sequential correlations in the streaming task, which are often ignored in the classical streaming setting where each input is assumed to be independent of the previous ones. We showcase the ability of RRNADE to adapt to concept drifts on synthetic density estimation tasks. We also apply RRNADE to online classification tasks on both real-world and synthetic datasets and compare it with multiple density-based as well as non-density-based online classification methods. In almost all of these tasks, RRNADE outperforms the other methods. Lastly, we conduct an ablation study demonstrating the complementary benefits of the density and recurrent modules.
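
The sketch below illustrates the general idea of pairing a recurrent module with a Gaussian mixture density head for sequential density estimation. It is a minimal, generic example under those assumptions, not the RRNADE architecture itself; all layer sizes and names are arbitrary.

```python
# Generic sketch: a recurrent state summarizes the stream seen so far, and a
# Gaussian mixture density head models the next real-valued observation.
# Illustrative only; not the RRNADE model.
import torch
import torch.nn as nn

class RecurrentMixtureDensity(nn.Module):
    def __init__(self, hidden_size=32, num_components=5):
        super().__init__()
        self.rnn = nn.GRUCell(1, hidden_size)
        self.head = nn.Linear(hidden_size, 3 * num_components)  # weights, means, log-stds
        self.hidden_size = hidden_size

    def log_prob(self, x):
        """Log-density of a 1-D sequence x of shape (T,), processed online."""
        h = torch.zeros(1, self.hidden_size)
        total = 0.0
        for t in range(x.shape[0]):
            logits, mu, log_std = self.head(h).squeeze(0).chunk(3)
            components = torch.distributions.Normal(mu, log_std.exp())
            mixture = torch.distributions.Categorical(logits=logits)
            gmm = torch.distributions.MixtureSameFamily(mixture, components)
            total = total + gmm.log_prob(x[t])     # p(x_t | x_{<t})
            h = self.rnn(x[t].view(1, 1), h)       # update recurrent state with x_t
        return total

model = RecurrentMixtureDensity()
seq = torch.randn(10)
loss = -model.log_prob(seq)   # negative log-likelihood, usable for online SGD updates
loss.backward()
print(float(loss))
```
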
Benchmarking State-Merging Algorithms for Learning Regular Languages
Adil Soubki
Jeffrey Heinz
Explaining Graph Neural Networks Using Interpretable Local Surrogates
Farzaneh Heidari
Perouz Taslakian
We propose an interpretable local surrogate (ILS) method for understanding the predictions of black-box graph models. Explainability methods are commonly employed to gain insights into black-box models and, given the widespread adoption of GNNs in diverse applications, understanding the underlying reasoning behind their decision-making processes becomes crucial. Our ILS method approximates the behavior of a black-box graph model by fitting a simple surrogate model in the local neighborhood of a given input example. Leveraging the interpretability of the surrogate, ILS is able to identify the most relevant nodes contributing to a specific prediction. To efficiently identify these nodes, we utilize group sparse linear models as local surrogates. Through empirical evaluations on explainability benchmarks, our method consistently outperforms state-of-the-art graph explainability methods. This demonstrates the effectiveness of our approach in providing enhanced interpretability for GNN predictions.
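
The sketch below illustrates the local-surrogate idea in a simplified form: perturb which neighborhood nodes are kept, query a black-box predictor, and fit a sparse linear surrogate whose coefficients rank node importance. Plain lasso stands in for the group-sparse model described above, and the black-box function is a toy stand-in for a GNN; everything here is illustrative.

```python
# Sketch of a local sparse surrogate for node-importance explanations.
# Toy stand-ins are used for the graph model and data; not the ILS implementation.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
num_nodes = 8                       # nodes in the local neighborhood of the target

def black_box(mask):
    """Stand-in for a GNN prediction restricted to the masked subgraph."""
    secret_weights = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.3, 0.0, 0.0])
    return float(mask @ secret_weights)

# Sample binary masks over neighborhood nodes and record the black-box outputs.
masks = rng.integers(0, 2, size=(200, num_nodes)).astype(float)
outputs = np.array([black_box(m) for m in masks])

# Fit the sparse local surrogate and rank nodes by absolute coefficient.
surrogate = Lasso(alpha=0.05).fit(masks, outputs)
importance = np.abs(surrogate.coef_)
print("most relevant nodes:", np.argsort(-importance)[:3])
```
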
Formal and Empirical Studies of Counting Behaviour in ReLU RNNs
Nadine El-Naggar
Andrew Ryzhikov
Laure Daviaud
Pranava Madhyastha
Tillman Weyde
Identification of Substitutable Context-Free Languages over Infinite Alphabets from Positive Data
Yutaro Numaya
Diptarama Hendrian
Ryo Yoshinaka
Ayumi Shinohara
This paper is concerned with the identification in the limit from positive data of substitutable context-free languages (CFLs) over infinite alphabets. Clark and Eyraud (2007) showed that substitutable CFLs over finite alphabets are learnable in this learning paradigm. We show that substitutable CFLs generated by grammars whose production rules may have predicates that represent sets of potentially infinitely many terminal symbols in a compact manner are learnable if the terminal symbol sets represented by those predicates are learnable, under a certain condition. This can be seen as a result parallel to Argyros and D'Antoni's work (2018), which amplifies the query learnability of predicate classes to that of symbolic automata classes. Our result is the first to show that such amplification is possible for identifying some CFLs in the limit from positive data.
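
For readers unfamiliar with substitutability, the sketch below computes substitutable pairs of substrings in a small positive sample, following the finite-alphabet definition of Clark and Eyraud (2007): two substrings are substitutable if they occur in at least one common context. The sample is a toy example chosen for illustration.

```python
# Sketch of the substitutability relation over a finite alphabet: u and v are
# substitutable w.r.t. a positive sample if some context (l, r) yields both
# l u r and l v r in the sample. Toy data only.
from collections import defaultdict
from itertools import combinations

sample = ["ab", "aabb", "aaabbb"]   # toy positive data from {a^n b^n}

# Map each substring to the set of contexts (left, right) it occurs in.
contexts = defaultdict(set)
for w in sample:
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            contexts[w[i:j]].add((w[:i], w[j:]))

# Two substrings are substitutable if their context sets intersect.
substitutable = [(u, v) for u, v in combinations(contexts, 2)
                 if contexts[u] & contexts[v]]
print(substitutable[:5])
```
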
Learning Syntactic Monoids from Samples by extending known Algorithms for learning State Machines
Simon Dieck
Sicco Verwer
For the inference of regular languages, most current methods learn a version of deterministic finite automata. Syntactic monoids are an alternative representation of regular languages, which have some advantages over automata. For example, traces can be parsed starting from any index, and the star-freeness of the language they represent can be checked in polynomial time. But, to date, no passive learning algorithm existed for syntactic monoids. In this paper, we prove that known state-merging algorithms for learning deterministic finite automata can be instrumented to learn syntactic monoids instead, by using as input a special structure proposed in this paper: the interfix-graph. Further, we introduce a method to encode frequencies on the interfix-graph, such that models can also be learned from only positive traces. We implemented this structure and performed experiments with both traditional data and data containing only positive traces. As such, this work answers basic theoretical and experimental questions regarding a novel passive learning algorithm for syntactic monoids.
Lower Bounds for Active Automata Learning
Loes Kruger
Bharat Garhewal
Frits W. Vaandrager