Dominique Beaini

Associate Industry Member
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Head of Graph Research, Valence Discovery

Biography

I am currently the team lead of the research unit at Valence Discovery, one of the leading companies in machine learning applied to drug discovery, and an adjunct professor at the Department of Computer Science and Operations Research (DIRO) of the Université de Montréal. My goal is to bring machine learning toward a better understanding of molecules and their interactions with human biology. I hold a PhD from Polytechnique Montréal; my previous research focused on robotics and computer vision.

My research interests are graph neural networks, self-supervised learning, quantum mechanics, drug discovery, computer vision, and robotics.

Current Students

Independent visiting researcher
Master's Research - Université de Montréal
Collaborating researcher - RWTH
Master's Research - Université de Montréal
Research Intern - Université de Montréal
Master's Research - Université de Montréal
Collaborating researcher - Valence
Co-supervisor:

Publications

Long Range Graph Benchmark
Vijay Prakash Dwivedi
Ladislav Rampášek
Mikhail Galkin
Ali Parviz
Anh Tuan Luu
Graph Neural Networks (GNNs) that are based on the message passing (MP) paradigm generally exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks are not able to capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been increasing interest in the development of Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thus enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop message passing often fare better in several existing graph benchmarks when combined with positional feature representations, among other innovations, hence limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func and Peptides-struct, which arguably require LRI reasoning to achieve strong performance on a given task. We benchmark both baseline GNNs and Graph Transformer networks to verify that the models which capture long-range dependencies perform significantly better on these tasks. Therefore, these datasets are suitable for benchmarking and exploration of MP-GNNs and Graph Transformer architectures that are intended to capture LRI.
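
To make the 1-hop limitation described in this abstract concrete, here is a minimal sketch of a single message-passing layer that only mixes a node's state with its direct neighbours. The mean aggregation, ReLU update and variable names are illustrative assumptions, not the exact formulation of any model benchmarked in the paper.

```python
import numpy as np

def message_passing_layer(adj, h, w_self, w_neigh):
    """One MP-GNN layer: each node combines its state with the mean of its 1-hop neighbours."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)    # avoid division by zero for isolated nodes
    neigh_mean = (adj @ h) / deg                              # messages come from direct neighbours only
    return np.maximum(h @ w_self + neigh_mean @ w_neigh, 0)  # ReLU update

# Toy 4-node path graph: a signal at node 0 needs 3 such layers to reach node 3,
# which is the kind of long-range interaction the benchmark is designed to expose.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = np.eye(4)                                                 # one-hot node features
rng = np.random.default_rng(0)
out = message_passing_layer(adj, h, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(out.shape)  # (4, 4)
```
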
Recipe for a General, Powerful, Scalable Graph Transformer
Ladislav Rampášek
Mikhail Galkin
Vijay Prakash Dwivedi
Anh Tuan Luu
We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications, but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being…
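
As a rough illustration of the kind of hybrid layer this recipe argues for, the sketch below runs a local 1-hop message-passing branch and a global all-pairs attention branch in parallel and sums them. The dense attention, mean aggregation and the way the two branches are combined are simplifying assumptions for illustration; in particular, the paper targets linear-complexity attention, whereas this toy version is quadratic in the number of nodes.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_layer(adj, h, w_local, w_q, w_k, w_v):
    # Local branch: aggregation restricted to the sparse graph structure.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    local = ((adj @ h) / deg) @ w_local
    # Global branch: dense self-attention over all node pairs, ignoring sparsity.
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return local + attn @ v                                   # combine both views of the graph
```
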
Directional Graph Networks
Saro Passaro
Vincent Létourneau
William Hamilton
Gabriele Corso
Pietro Lio
The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologically-derived directional flows. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then, we propose the use of the Laplacian eigenvectors as such a vector field. We show that the method generalizes CNNs on an…
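
The sketch below illustrates the core idea of taking a Laplacian eigenvector as a vector field on the graph and aggregating neighbour messages along it. The row normalisation and the simple "average / derivative" aggregators are simplified assumptions, not the exact operators defined in the paper.

```python
import numpy as np

def directional_aggregation(adj, h):
    lap = np.diag(adj.sum(axis=1)) - adj               # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)                   # eigenvalues in ascending order
    phi = eigvecs[:, 1]                                # first non-trivial eigenvector defines the field
    field = adj * (phi[None, :] - phi[:, None])        # per-edge direction: phi_j - phi_i on existing edges
    f_hat = field / np.maximum(np.abs(field).sum(axis=1, keepdims=True), 1e-8)
    dir_avg = np.abs(f_hat) @ h                        # smoothing along the field
    dir_dx = f_hat @ h - f_hat.sum(axis=1, keepdims=True) * h  # finite-difference style directional derivative
    return dir_avg, dir_dx

# On a 4-node path graph the field points monotonically along the path,
# so the "derivative" aggregator approximates differences between consecutive nodes.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
avg, dx = directional_aggregation(adj, np.arange(4.0)[:, None])
print(avg.ravel(), dx.ravel())
```
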
Rethinking Graph Transformers with Spectral Attention
Devin Kreuzer
William Hamilton
Vincent Létourneau
Prudencio Tossou
In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the…
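
As a small illustration of the spectral notion of "position" this abstract alludes to, the sketch below uses the lowest-frequency eigenvectors of the graph Laplacian as node coordinates. Taking the raw eigenvectors directly (with k = 2) is an illustrative simplification; it is not the learned spectral attention mechanism presented in the paper.

```python
import numpy as np

def laplacian_positional_encoding(adj, k=2):
    lap = np.diag(adj.sum(axis=1)) - adj        # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)            # eigenvalues sorted in ascending order
    return eigvecs[:, 1:k + 1]                  # skip the constant eigenvector; each row is a node "position"
```
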
Improving Convolutional Neural Networks Via Conservative Field Regularisation and Integration
Sofiane Wozniak Achiche
Maxime Raison
Saliency Enhancement using Gradient Domain Edges Merging
Sofiane Wozniak Achiche
Alexandre Duperre
Maxime Raison
In recent years, there has been rapid progress on binary problems in computer vision, such as edge detection, which finds the boundaries of an image, and salient object detection, which finds the important object in an image. This progress happened thanks to the rise of deep learning and convolutional neural networks (CNN), which allow complex and abstract features to be extracted. However, edge detection and saliency are still two different fields and do not interact, although it is intuitive for a human to detect salient objects based on their boundaries. Those features are not well merged in a CNN because edges and surfaces do not intersect: one feature represents a region while the other represents boundaries between regions. In the current work, the main objective is to develop a method to merge the edges with the saliency maps to improve the performance of the saliency. Hence, we developed gradient-domain merging (GDM), which can be used to quickly combine the image-domain information of salient object detection with the gradient-domain information of edge detection. This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of the F-measure that is at least 3.4 times higher on the DUT-OMRON dataset and 6.6 times higher on the ECSSD dataset, when compared to competing algorithms such as denseCRF and BGOF. The SEE algorithm is split into 2 parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
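
To give a feel for what merging in the gradient domain can look like, here is a toy sketch that scales a saliency map's gradients by an edge map and reintegrates the result with a few Jacobi iterations of the Poisson equation. The scaling rule, the periodic boundary handling and the solver are simplified assumptions of my own; they are not the published SEE-Pre / SEE-Post pipeline.

```python
import numpy as np

def merge_saliency_with_edges(saliency, edges, iters=500):
    # Image domain -> gradient domain: forward-difference gradients of the saliency map.
    gx = np.diff(saliency, axis=1, append=saliency[:, -1:])
    gy = np.diff(saliency, axis=0, append=saliency[-1:, :])
    # Strengthen the saliency gradients wherever the edge detector fires.
    gx *= 1.0 + edges
    gy *= 1.0 + edges
    # Divergence of the merged gradient field.
    div = (np.diff(gx, axis=1, prepend=gx[:, :1])
           + np.diff(gy, axis=0, prepend=gy[:1, :]))
    div -= div.mean()                            # keep the periodic Poisson problem consistent
    # Gradient domain -> image domain: Jacobi iterations on the Poisson equation.
    out = saliency.copy()
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out = (nb - div) / 4.0
    return np.clip(out, 0.0, 1.0)
```
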
Principal Neighbourhood Aggregation for Graph Nets
Gabriele Corso
Luca Cavalleri
Pietro Lio
Petar Veličković