Publications

Using rare genetic mutations to revisit structural brain asymmetry
Jakub Kopal
Kuldeep Kumar
Kimia Shafighi
Karin Saltoun
Claudia Modenato
Clara A. Moreau
Guillaume Huguet
Martineau Jean-Louis
Charles-Olivier Martin
Zohra Saci
Nadine Younis
Elise Douard
Khadije Jizi
Alexis Beauchamp-Chatel
Leila Kushan
Ana I. Silva
Marianne B.M. van den Bree
David E.J. Linden
M. J. Owen
Jeremy Hall
Sarah Lippé
Bogdan Draganski
Ida E. Sønderby
Ole A. Andreassen
David C. Glahn
Paul M. Thompson
Carrie E. Bearden
Robert Zatorre
Sébastien Jacquemont
Fast D_{M,M} calculation in LDR brachytherapy using deep learning methods
Francisco Berumen
Luc Beaulieu
Meta Pseudo Labels for Anomaly Detection via Partially Observed Anomalies
Sinong Zhao
Zhaoyang Yu
Xiaofei Wang
T. Marbach
Gang Wang
Xiaoguang Liu
A stochastic integer programming approach to reserve staff scheduling with preferences
Carl Perreault-Lafleur
Guy Desaulniers
VulANalyzeR: Explainable Binary Vulnerability Detection with Multi-task Learning and Attentional Graph Convolution
Litao Li
Steven H. H. Ding
Yuan Tian
Philippe Charland
Weihan Ou
Leo Song
Congwei Chen
SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)
Shamsuddeen Hassan Muhammad
Idris Abdulmumin
Seid Muhie Yimam
Ibrahim Ahmad
Nedjma Ousidhoum
Abinew Ayele
Saif Mohammad
Meriem Beloucif
Structure-aware protein self-supervised learning
Can Chen
Jingbo Zhou
Fan Wang
Dejing Dou
Adaptive patch foraging in deep reinforcement learning agents
Nathan Wispinski
Andrew Butcher
Craig S. Chapman
Matthew Botvinick
Patrick M. Pilarski
Patch foraging is one of the most heavily studied behavioral optimization challenges in biology. However, despite its importance to biological intelligence, this behavioral optimization problem is understudied in artificial intelligence research. Patch foraging is especially amenable to study because it has a known optimal solution, one that may be difficult to discover with current techniques in deep reinforcement learning. Here, we investigate deep reinforcement learning agents in an ecological patch foraging task. For the first time, we show that machine learning agents can learn to patch forage adaptively in patterns similar to biological foragers, and approach optimal patch foraging behavior when accounting for temporal discounting. Finally, we show emergent internal dynamics in these agents that resemble single-cell recordings from foraging non-human primates, which complements experimental and theoretical work on the neural mechanisms of biological foraging. This work suggests that agents interacting in complex environments with ecologically valid pressures arrive at common solutions, pointing to the emergence of foundational computations behind adaptive, intelligent behavior in both biological and artificial agents.
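The known optimal solution mentioned in the abstract is the marginal value theorem from foraging theory: an optimal forager leaves a patch once its instantaneous reward rate falls to the long-run average rate of the environment. Below is a minimal numerical sketch of that rule, assuming a hypothetical exponentially depleting patch and a fixed travel time; all parameters are illustrative and are not taken from the paper.

```python
import numpy as np

# Marginal value theorem (MVT): leave a patch at the time t* where the
# instantaneous gain rate g'(t*) equals the long-run average rate
# g(t*) / (travel_time + t*).  Parameters below are illustrative only.
def mvt_optimal_leaving_time(r0=1.0, decay=0.5, travel_time=2.0):
    # Hypothetical cumulative gain in a depleting patch:
    # g(t) = (r0 / decay) * (1 - exp(-decay * t)).
    g = lambda t: (r0 / decay) * (1.0 - np.exp(-decay * t))
    g_prime = lambda t: r0 * np.exp(-decay * t)

    ts = np.linspace(1e-3, 50.0, 100_000)
    residual = g_prime(ts) - g(ts) / (travel_time + ts)
    return ts[np.argmin(np.abs(residual))]

print(f"MVT-optimal leaving time: {mvt_optimal_leaving_time():.2f} time units")
```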
Autonomous optimization of neuroprosthetic stimulation parameters that drive the motor cortex and spinal cord outputs in rats and monkeys
Rose Guay Hottin
Sandrine L. Côté
Elena Massai
Léo Choinière
Uzay Macar
Samuel Laferrière
Parikshat Sirpal
Stephan Quessy
Marina Martinez
Numa Dancause
A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces
Charline Le Lan
Joshua Greaves
Jesse Farebrother
Mark Rowland
Fabian Pedregosa
Rishabh Agarwal
In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists in defining a loss function whose minimizer is the desired principal subspace, and constructing a gradient estimate of this loss whose bias can be controlled.
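As a rough illustration of the general idea (not the authors' estimator), the sketch below runs plain stochastic gradient descent on a squared-error loss over randomly sampled matrix entries; at the minimum, the learned factor spans the top-k principal subspace of the data matrix. The synthetic matrix, rank, learning rate, and step counts are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: a low-rank matrix plus a little noise.
n_rows, n_cols, k = 100, 120, 4
M = rng.normal(size=(n_rows, k)) @ rng.normal(size=(n_cols, k)).T
M += 0.05 * rng.normal(size=(n_rows, n_cols))

# Factor model M ~ U @ V.T.  Minimizing the squared entrywise error drives
# the column space of U toward the top-k left singular (principal) subspace.
U = 0.1 * rng.normal(size=(n_rows, k))
V = 0.1 * rng.normal(size=(n_cols, k))
lr, batch, n_steps = 0.02, 128, 50_000

for _ in range(n_steps):
    # A minibatch of uniformly sampled entries gives an unbiased gradient estimate.
    i = rng.integers(0, n_rows, size=batch)
    j = rng.integers(0, n_cols, size=batch)
    err = (U[i] * V[j]).sum(axis=1) - M[i, j]
    np.add.at(U, i, -lr * err[:, None] * V[j])
    np.add.at(V, j, -lr * err[:, None] * U[i])

# Alignment with the true top-k subspace; this should approach 1.0 as SGD converges.
Q, _ = np.linalg.qr(U)
top_k = np.linalg.svd(M, full_matrices=False)[0][:, :k]
print("smallest principal-subspace alignment:",
      round(np.linalg.svd(top_k.T @ Q, compute_uv=False).min(), 3))
```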
A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation
Florian Bordes
Samuel Lavoie
Randall Balestriero
Nicolas Ballas
Conservative objective models are a special kind of contrastive divergence-based energy model
Christopher Beckham
In this work we theoretically show that conservative objective models (COMs) for offline model-based optimisation (MBO) are a special kind of contrastive divergence-based energy model, one where the energy function represents both the unconditional probability of the input and the conditional probability of the reward variable. While the initial formulation only samples modes from its learned distribution, we propose a simple fix that replaces its gradient ascent sampler with a Langevin MCMC sampler. This gives rise to a special probabilistic model where the probability of sampling an input is proportional to its predicted reward. Lastly, we show that better samples can be obtained if the model is decoupled so that the unconditional and conditional probabilities are modelled separately.
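For intuition, here is a generic, deliberately toy unadjusted Langevin sampler (not the paper's COM setup): taking the energy to be the negative log of a positive, differentiable "reward model" f(x), the chain's stationary distribution assigns probability proportional to f(x), i.e. proportional to the predicted reward. The Gaussian-bump reward, step size, and chain length are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "reward model": a positive Gaussian bump centred at x = 2.
def reward(x):
    return np.exp(-0.5 * (x - 2.0) ** 2)

def grad_energy(x):
    # Energy E(x) = -log reward(x); for the bump above, dE/dx = x - 2.
    return x - 2.0

def langevin_chain(x0=0.0, step=0.01, n_steps=20_000, burn_in=2_000):
    xs, x = [], x0
    for t in range(n_steps):
        # Unadjusted Langevin update: energy descent plus Gaussian noise.
        x = x - step * grad_energy(x) + np.sqrt(2.0 * step) * rng.normal()
        if t >= burn_in:
            xs.append(x)
    return np.array(xs)

# Samples concentrate where the predicted reward is high (here, around x = 2).
samples = langevin_chain()
print(f"sample mean ~ 2.0: {samples.mean():.2f}, sample std ~ 1.0: {samples.std():.2f}")
```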