
Dominique Beaini

Associate Industry Member
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Head of Graph Research, Valence Discovery
Research Topics
Graph Neural Networks
Learning on Graphs
Molecular Modeling
Multimodal Learning

Biography

I am currently a research unit team lead at Valence Discovery, one of the leading companies in machine learning applied to drug discovery. I am also an adjunct professor at Université de Montréal, in the Department of Computer Science and Operations Research (DIRO). My goal is to push the state of machine learning toward a better understanding of molecules and their interactions with human biology. I completed my PhD at Polytechnique Montréal in the area of robotics and computer vision.

My research interests are graph neural networks, self-supervised learning, quantum mechanics, drug discovery, computer vision and robotics.

Current Students

Independent visiting researcher
Master's Research - Université de Montréal
Master's Research - Université de Montréal
Master's Research - Université de Montréal
Collaborating researcher - RWTH
Research Intern - Université de Montréal
Collaborating researcher - Valence

Publications

GPS++: An Optimised Hybrid MPNN/Transformer for Molecular Property Prediction
Dominic Masters
Josef Dean
Kerstin Klaeser
Zhiyi Li
Samuel Maddrell-Mander
Adam Sanders
Hatem Helal
Deniz Beker
Ladislav Rampášek
3D Infomax improves GNNs for Molecular Property Prediction
Hannes Stärk
Gabriele Corso
Prudencio Tossou
Christian Dallago
Stephan Günnemann
Pietro Lio
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their predictions for many molecular properties. However, this information is infeasible to compute at the scale required by most real-world applications. We propose pre-training a model to understand the geometry of molecules given only their 2D molecular graph. Using methods from self-supervised learning, we maximize the mutual information between a 3D summary vector and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to inform downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of molecular properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Crucially, the learned representations can be effectively transferred between datasets with vastly different molecules.
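The mutual-information objective lends itself to a short contrastive sketch. The following is a minimal illustration rather than the paper's implementation: it assumes a batch where row i of `z_2d` (from the 2D GNN) and row i of `z_3d` (the 3D summary vector) describe the same molecule, and uses a standard symmetric InfoNCE/NT-Xent loss as the mutual-information lower bound.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_2d: torch.Tensor, z_3d: torch.Tensor, tau: float = 0.1):
    """Contrastive lower bound on the mutual information between a GNN's
    2D-graph embeddings and 3D summary vectors of the same molecules.

    z_2d, z_3d: (batch, dim) embeddings; row i of each tensor comes from
    the same molecule, so the pairs (i, i) are positives and every other
    pairing in the batch serves as a negative.
    """
    z_2d = F.normalize(z_2d, dim=-1)
    z_3d = F.normalize(z_3d, dim=-1)
    logits = z_2d @ z_3d.t() / tau  # (batch, batch) cosine similarities
    targets = torch.arange(z_2d.size(0), device=z_2d.device)
    # Symmetric cross-entropy: match each 2D view to its 3D view and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

After pretraining with an objective of this kind, the 3D encoder can be discarded and the 2D GNN fine-tuned on molecules whose geometry is never computed.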
Long Range Graph Benchmark
Vijay Prakash Dwivedi
Ladislav Rampášek
Mikhail Galkin
Ali Parviz
Anh Tuan Luu
Graph Neural Networks (GNNs) that are based on the message passing (MP) paradigm generally exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks are not able to capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been an increasing interest in the development of Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thus enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop message passing often fare better in several existing graph benchmarks when combined with positional feature representations, among other innovations, hence limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func and Peptides-struct that arguably require LRI reasoning to achieve strong performance in a given task. We benchmark both baseline GNNs and Graph Transformer networks to verify that the models which capture long-range dependencies perform significantly better on these tasks. Therefore, these datasets are suitable for benchmarking and exploration of MP-GNNs and Graph Transformer architectures that are intended to capture LRI.
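For readers who want to try the benchmark: recent releases of PyTorch Geometric ship an LRGBDataset loader covering all five datasets. A minimal sketch, treating the exact loader API as an assumption to be checked against the installed version:

```python
from torch_geometric.datasets import LRGBDataset
from torch_geometric.loader import DataLoader

# Peptides-func: multi-label classification of peptide function from
# molecular graphs, one of the five LRGB datasets.
train_set = LRGBDataset(root="data/LRGB", name="Peptides-func", split="train")
loader = DataLoader(train_set, batch_size=32, shuffle=True)

batch = next(iter(loader))
# batch.x: node features, batch.edge_index: graph connectivity,
# batch.y: per-graph multi-label targets.
print(batch.num_graphs, batch.x.shape, batch.y.shape)
```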
Recipe for a General, Powerful, Scalable Graph Transformer
Ladislav Rampášek
Mikhail Galkin
Vijay Prakash Dwivedi
Anh Tuan Luu
We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications, but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being local, global or relative…
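The core of the recipe is a layer that sums a local message-passing block with a global attention block. Below is a minimal sketch of one such layer, not the paper's implementation: `GINConv` stands in for the local MPNN, and dense multi-head attention stands in for the linear-complexity Transformer (e.g. Performer) the paper uses, so this version is quadratic in the number of nodes.

```python
import torch
from torch import nn
from torch_geometric.nn import GINConv

class GPSLayer(nn.Module):
    """One hybrid layer in the spirit of the GPS recipe: local message
    passing plus global attention, summed and fed through an MLP.
    Positional/structural encodings are assumed to have been mixed into
    the node features beforehand."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.local = GINConv(nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, edge_index):
        # Local block: sparse message passing over the input graph.
        h_local = self.local(x, edge_index)
        # Global block: full attention over all nodes (single graph shown
        # here; batched graphs need a per-graph attention mask).
        h = x.unsqueeze(0)
        h_global, _ = self.attn(h, h, h)
        h_global = h_global.squeeze(0)
        out = self.norm1(x + h_local + h_global)  # residual + sum of blocks
        return self.norm2(out + self.mlp(out))
```

Recent PyTorch Geometric versions also ship a `GPSConv` layer along these lines, which may be preferable to rolling your own.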
Directional Graph Networks
Saro Passaro
Vincent Létourneau
William Hamilton
Gabriele Corso
Pietro Lio
The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologically-derived directional flows. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then, we propose the use of the Laplacian eigenvectors as such a vector field. We show that the method generalizes CNNs on an n-dimensional grid…
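A toy version of the directional aggregation can be written in a few lines. This sketch is illustrative only and glosses over the paper's exact normalisation and aggregator choices: it takes the Fiedler vector of the graph Laplacian as the vector field, L1-normalises the per-edge field weights, and computes a directional derivative and a directional smoothing of the node features.

```python
import numpy as np

def directional_aggregation(x, adj):
    """Sketch of DGN-style directional aggregation on a small graph.

    x:   (n, d) node features.
    adj: (n, n) dense symmetric 0/1 adjacency matrix (connected graph).
    """
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    # Eigenvectors sorted by eigenvalue; for a connected graph, column 1
    # is the Fiedler vector, giving a globally consistent "direction".
    _, eigvecs = np.linalg.eigh(lap)
    phi = eigvecs[:, 1]

    # Per-edge field: difference of the eigenvector along each edge.
    field = adj * (phi[None, :] - phi[:, None])           # (n, n)
    # L1-normalise per node so directional weights sum to one in magnitude.
    w = field / (np.abs(field).sum(axis=1, keepdims=True) + 1e-8)

    # Directional derivative: sum_v w_uv * (x_v - x_u).
    dx_derivative = w @ x - w.sum(axis=1, keepdims=True) * x
    # Directional smoothing: field-weighted average of the neighbours.
    dx_smooth = np.abs(w) @ x
    return dx_derivative, dx_smooth
```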
Rethinking Graph Transformers with Spectral Attention
Devin Kreuzer
William Hamilton
Vincent Létourneau
Prudencio Tossou
In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the Spectral Attention Network (SAN)…
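The paper's answer, the Spectral Attention Network, builds a learned positional encoding on top of the Laplacian spectrum. As a rough illustration of the raw ingredient such methods start from (the learned encoder, sign-invariance handling, and attention mechanism of the actual model are not shown), here is a sketch that extracts Laplacian eigenvectors as node positions:

```python
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int = 8):
    """Return the k smallest non-trivial eigenvectors of the normalised
    graph Laplacian as node positional features, the spectral analogue
    of sinusoidal positions in sequence Transformers."""
    deg = adj.sum(axis=1)
    # Symmetric normalised Laplacian: I - D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-8))
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)
    # Skip the trivial constant eigenvector; each remaining eigenvector
    # assigns every node a coordinate, with low frequencies encoding
    # coarse position in the graph.
    return eigvecs[:, 1:k + 1], eigvals[1:k + 1]
```

Note that eigenvectors are only defined up to sign (and up to rotation within repeated eigenspaces), which is one of the ambiguities a learned positional encoding has to absorb.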
Improving Convolutional Neural Networks Via Conservative Field Regularisation and Integration
Sofiane Wozniak Achiche
Maxime Raison
Saliency Enhancement using Gradient Domain Edges Merging
Sofiane Wozniak Achiche
Alexandre Duperre
Maxime Raison
In recent years, there has been rapid progress on binary problems in computer vision, such as edge detection, which finds the boundaries in an image, and salient object detection, which finds the important object in an image. This progress happened thanks to the rise of deep learning and convolutional neural networks (CNN), which allow the extraction of complex and abstract features. However, edge detection and saliency are still two separate fields that do not interact, although it is intuitive for a human to detect salient objects based on their boundaries. These features are not easily merged in a CNN because edges and surfaces do not intersect: one represents regions while the other represents the boundaries between regions. In the current work, the main objective is to develop a method for merging edge maps with saliency maps to improve saliency performance. Hence, we developed gradient-domain merging (GDM), which can quickly combine the image-domain information of salient object detection with the gradient-domain information of edge detection. This leads to our proposed saliency enhancement using edges (SEE), which improves the F-measure by an average factor of at least 3.4 on the DUT-OMRON dataset and 6.6 on the ECSSD dataset compared to competing algorithms such as denseCRF and BGOF. The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
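The gradient-domain idea can be illustrated with a toy merge. This is a pedagogical sketch, not the paper's GDM algorithm: it boosts the saliency map's gradients wherever the edge map is strong, then reintegrates the modified gradient field by iterating a Jacobi solver on the resulting Poisson equation.

```python
import numpy as np

def merge_edges_into_saliency(saliency, edges, boost=2.0, iters=500):
    """Toy gradient-domain merge: sharpen a saliency map along detected
    edges. Both inputs are float arrays in [0, 1] of the same shape."""
    # Image domain -> gradient domain (forward differences).
    gx = np.zeros_like(saliency)
    gy = np.zeros_like(saliency)
    gx[:, :-1] = saliency[:, 1:] - saliency[:, :-1]
    gy[:-1, :] = saliency[1:, :] - saliency[:-1, :]

    # Amplify gradients where the edge detector fires.
    gx *= 1.0 + boost * edges
    gy *= 1.0 + boost * edges

    # Divergence of the modified field (backward differences) is the
    # right-hand side of the Poisson equation  laplacian(u) = div.
    div = gx.copy()
    div[:, 1:] -= gx[:, :-1]
    div += gy
    div[1:, :] -= gy[:-1, :]

    # Jacobi iterations with replicated borders (Neumann-like boundary).
    u = saliency.copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode="edge")
        u = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - div) / 4.0
    return np.clip(u, 0.0, 1.0)
```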
Principal Neighbourhood Aggregation for Graph Nets
Gabriele Corso
Luca Cavalleri
Pietro Lio
Petar Veličković