Publications

VisPaD: Visualization and Pattern Discovery for Fighting Human Trafficking
Catalina Vajiac
Andreas Olligschlaeger
Meng-Chieh Lee
Namyong Park
Duen Horng Chau
Christos Faloutsos
RetroGNN: Fast Estimation of Synthesizability for Virtual Screening and De Novo Design by Learning from Slow Retrosynthesis Software
Cheng-Hao Liu
Stanisław Jastrzębski
Paweł Włodarczyk-Pruszyński
Marwin Segler
Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning
Gheorghe Comanici
Amelia Glaese
Anita Gergely
Daniel Toyama
Tyler Jackson
Hierarchical Reinforcement Learning (HRL) allows interactive agents to decompose complex problems into a hierarchy of sub-tasks. Higher-level tasks can invoke the solutions of lower-level tasks as if they were primitive actions. In this work, we study the utility of hierarchical decompositions for learning an appropriate way to interact with a complex interface. Specifically, we train HRL agents that can interface with applications in a simulated Android device. We introduce a Hierarchical Distributed Deep Reinforcement Learning architecture that learns (1) subtasks corresponding to simple finger gestures, and (2) how to combine these gestures to solve several Android tasks. Our approach relies on goal conditioning and can be used more generally to convert any base RL agent into an HRL agent. We use the AndroidEnv environment to evaluate our approach. For the experiments, the HRL agent uses a distributed version of the popular DQN algorithm to train different components of the hierarchy. While the native action space is completely intractable for simple DQN agents, our architecture can be used to establish an effective way to interact with different tasks, significantly improving the performance of the same DQN agent over different levels of abstraction.
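The core idea of the abstract — a high-level controller that invokes a goal-conditioned gesture sub-policy as if it were a single primitive action — can be sketched as follows. This is an illustrative toy (not the paper's code): the class names, the pixel-stepping gesture, and the directly supplied target all stand in for components that the paper learns with DQN.

```python
def move_toward(pos, goal):
    """One primitive step: move the simulated finger one pixel toward the goal."""
    x, y = pos
    gx, gy = goal
    step = lambda c, g: c + (g > c) - (g < c)
    return (step(x, gx), step(y, gy))

class TapGestureAgent:
    """Low-level, goal-conditioned sub-policy: reach a target pixel.
    In the paper, this level would itself be a trained RL agent."""
    def rollout(self, pos, goal):
        trace = [pos]
        while pos != goal:
            pos = move_toward(pos, goal)
            trace.append(pos)
        return trace

class HierarchicalController:
    """High-level policy that treats a whole gesture rollout as one action."""
    def __init__(self, sub_policy):
        self.sub_policy = sub_policy

    def act(self, pos, target):
        # In the paper this goal would come from a learned (e.g. DQN) policy;
        # here it is passed in directly for illustration.
        return self.sub_policy.rollout(pos, target)
```

The key structural point is that `HierarchicalController.act` never touches primitive actions: the hierarchy hides the (otherwise intractable) native action space behind goal-conditioned sub-tasks.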
Local Learning with Neuron Groups
Adeetya Patel
Michael Eickenberg
A Strong Node Classification Baseline for Temporal Graphs
Microscopy-BIDS: An Extension to the Brain Imaging Data Structure for Microscopy Data
Marie-Hélène Bourget
Lee Kamentsky
Satrajit S. Ghosh
Giacomo Mazzamuto
Alberto Lazari
Christopher J. Markiewicz
Robert Oostenveld
Guiomar Niso
Yaroslav O. Halchenko
Ilona Lipp
Sylvain Takerkart
Paule-Joanne Toussaint
Ali R. Khan
Gustav Nilsonne
Filippo Maria Castelli
Stefan Ross Eric Franklin Anthony Rémi Christopher J. Taylor Appelhoff
The Brain Imaging Data Structure (BIDS) is a specification for organizing, sharing, and archiving neuroimaging data and metadata in a reusable way. First developed for magnetic resonance imaging (MRI) datasets, the community-led specification evolved rapidly to include other modalities such as magnetoencephalography, positron emission tomography, and quantitative MRI (qMRI). In this work, we present an extension to BIDS for microscopy imaging data, along with example datasets. Microscopy-BIDS supports common imaging methods, including 2D/3D, ex/in vivo, micro-CT, and optical and electron microscopy. Microscopy-BIDS also includes comprehensible metadata definitions for hardware, image acquisition, and sample properties. This extension will facilitate future harmonization efforts in the context of multi-modal, multi-scale imaging such as the characterization of tissue microstructure with qMRI.
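To make the data-organization idea concrete, here is a hypothetical minimal layout in the style of a Microscopy-BIDS dataset. The dataset name, subject, sample, and file names are invented for illustration; consult the specification itself for the authoritative entities and suffixes.

```
my_dataset/
├── dataset_description.json   # dataset-level metadata
├── participants.tsv           # one row per subject
├── samples.tsv                # one row per tissue sample
└── sub-01/
    └── micr/                  # microscopy datatype directory
        ├── sub-01_sample-A_SEM.png   # scanning electron microscopy image
        └── sub-01_sample-A_SEM.json  # sidecar: hardware/acquisition metadata
```

The sidecar JSON alongside each image carries the hardware, acquisition, and sample metadata that the abstract refers to, so tools can interpret any file from its name and sidecar alone.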
Masked Siamese Networks for Label-Efficient Learning
Mahmoud Assran
Mathilde Caron
Ishan Misra
Piotr Bojanowski
Pascal Vincent
Armand Joulin
Michael G. Rabbat
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Our code is publicly available.
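The matching objective described above — comparing the masked (anchor) view's representation to the unmasked (target) view's via soft assignments to a set of prototypes — can be sketched in NumPy. This is a simplified illustration, not the paper's implementation: the temperature values and the absence of the paper's mean-entropy regularizer are assumptions made for brevity.

```python
import numpy as np

def softmax(z, tau):
    """Temperature-scaled softmax over the last axis, numerically stabilized."""
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msn_loss(anchor_repr, target_repr, prototypes,
             tau_anchor=0.1, tau_target=0.025):
    """Cross-entropy between prototype assignments of the masked (anchor)
    view and the unmasked (target) view. Shapes: (batch, dim) and (K, dim)."""
    # L2-normalize so the dot products are cosine similarities
    a = anchor_repr / np.linalg.norm(anchor_repr, axis=-1, keepdims=True)
    t = target_repr / np.linalg.norm(target_repr, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    # Soft assignments to prototypes; the target uses a sharper temperature
    pa = softmax(a @ p.T, tau_anchor)
    pt = softmax(t @ p.T, tau_target)
    # Cross-entropy H(target, anchor), averaged over the batch
    return float(-np.mean(np.sum(pt * np.log(pa + 1e-12), axis=-1)))
```

In training, only the anchor branch would receive gradients; the sharper target temperature encourages confident, near one-hot target assignments for the anchor to match.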
Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations
Anthony Bilodeau
Constantin V.L. Delmas
Martin Parent
Paul De Koninck
PhyloPGM: boosting regulatory function prediction accuracy using evolutionary information
Supplementary data are available at Bioinformatics online.
User Experience of a Computer-Based Decision Aid for Prenatal Trisomy Screening: Mixed Methods Explanatory Study
Titilayo Tatiana Agbadje
Chantale Pilon
Pierre Bérubé
Jean-Claude Forest
François Rousseau
S. A. Rahimi
Yves Giguère
France Légaré
Deep learning of chest X-rays can predict mechanical ventilation outcome in ICU-admitted COVID-19 patients
Daniel Gourdeau
Olivier Potvin
Jason Henry Biem
Lyna Abrougui
Patrick Archambault
Carl Chartrand-Lefebvre
Louis Dieumegarde
Louis Gagnon
Raphaelle Giguère
Alexandre Hains
Marie-Hélène Lévesque
Simon Nepveu
Lorne Rosenbloom
An Tang
Issac Yang
Nathalie Duchesne
Simon Duchesne
TopiOCQA: Open-domain Conversational Question Answering with Topic Switching
Shehzaad Dhuliawala
Kaheer Suleman