Eilif Benjamin Muller

Associate Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Neurosciences
Principal investigator, Architectures of Biological Learning Lab (ABL-Lab), CHU Sainte-Justine Research Center
Research Topics
Computational Neuroscience
Computer Vision
Deep Learning
Dynamical Systems
Generative Models
Online Learning
Recurrent Neural Networks
Representation Learning

Biography

Eilif B. Muller is a neuroscientist and AI researcher who uses computational and mathematical approaches to study the biological and algorithmic mechanisms of learning in the mammalian neocortex. Muller obtained his BSc in mathematical physics (2001) from Simon Fraser University, and his MSc (2003) and PhD (2007) in physics, with a focus on computational neuroscience, from the Ruprecht Karl University of Heidelberg, Germany’s oldest university. His postdoctoral work (2007–2010) with Wulfram Gerstner at the Laboratory of Computational Neuroscience of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland focused on network dynamics, simulation technology and plasticity.

From 2011 to 2019, he led the research team at EPFL’s Blue Brain Project, which pioneered in silico neuroscience, a new era of data-driven brain tissue simulation. In 2015, Muller and his colleagues published their landmark team-science study “Reconstruction and Simulation of Neocortical Microcircuitry” in Cell. According to Christof Koch, President and CSO of the Allen Institute for Brain Science, this new approach represents “the most complete simulation of a piece of excitable brain matter to date.” This work also enabled Muller and his team to make significant contributions to our understanding of the structure, dynamics and plasticity of the neocortex, resulting in publications in top journals such as Nature Neuroscience, Nature Communications and Cerebral Cortex.

In 2019, Muller moved to Montréal, attracted by the thriving Neuro-AI research community. He initially served as a senior researcher at Element AI, prior to being appointed to the Université de Montréal and Centre Hospitalier Universitaire (CHU) Sainte-Justine to launch the Architectures of Biological Learning Lab.

Current Students

PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal
PhD - McGill University (co-supervisor)
Postdoctorate - Université de Montréal
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal (principal supervisor)
Master's Research - Université de Montréal

Publications

Learning to combine top-down context and feed-forward representations under ambiguity with apical and basal dendrites
Seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models
Joint-embedding predictive architecture (JEPA) is a self-supervised learning (SSL) paradigm with the capacity for world modeling via action-conditioned prediction. Previously, JEPA world models have been shown to learn action-invariant or action-equivariant representations by predicting one view of an image from another. Unlike JEPA and similar SSL paradigms, animals, including humans, learn to recognize new objects through a sequence of active interactions. To introduce sequential interactions, we propose seq-JEPA, a novel SSL world model equipped with an autoregressive memory module. Seq-JEPA aggregates a sequence of action-conditioned observations to produce a global representation of them. This global representation, conditioned on the next action, is used to predict the latent representation of the next observation. We empirically show the advantages of this sequence of action-conditioned observations and examine our sequential modeling paradigm in two settings: (1) predictive learning across saccades, a method inspired by the role of eye movements in embodied vision, which learns self-supervised image representations by processing a sequence of low-resolution visual patches sampled from image saliencies, without any hand-crafted data augmentations; and (2) the invariance-equivariance trade-off, where seq-JEPA's architecture results in an automatic separation of invariant and equivariant representations, with the aggregated autoregressor outputs being mostly action-invariant and the encoder output being equivariant. This is in contrast with many equivariant SSL methods that expect a single representational space to contain both invariant and equivariant features, potentially creating a trade-off between the two. Empirically, seq-JEPA achieves competitive performance on both invariance- and equivariance-related benchmarks compared to existing methods. Importantly, both invariance- and equivariance-related downstream performances increase as the number of available observations increases.
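
The architecture described in the abstract lends itself to a compact sketch: an online encoder embeds each observation, an autoregressive module aggregates the action-conditioned sequence into a global representation, and a predictor, conditioned on the next action, regresses the latent representation of the next observation produced by a target encoder. The PyTorch sketch below illustrates that loop under several assumptions not taken from the paper: toy MLP encoders in place of a vision backbone, an exponential-moving-average target encoder, a mean-squared-error latent loss, and hypothetical names (Encoder, SeqAggregator, training_step) and dimensions throughout.

# Illustrative seq-JEPA-style training step; module names and sizes are hypothetical.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

D_REP, D_ACT, N_ACTIONS = 128, 32, 16  # illustrative dimensions


class Encoder(nn.Module):
    """Toy observation encoder (stands in for a ViT/CNN backbone)."""
    def __init__(self, d_in=3 * 32 * 32, d_rep=D_REP):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(d_in, 256),
                                 nn.ReLU(), nn.Linear(256, d_rep))

    def forward(self, x):
        return self.net(x)


class SeqAggregator(nn.Module):
    """Causal transformer over action-conditioned observation representations."""
    def __init__(self, d_rep=D_REP, d_act=D_ACT, n_layers=2, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(d_rep + d_act, d_rep)
        layer = nn.TransformerEncoderLayer(d_model=d_rep, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, reps, act_embs):
        h = self.proj(torch.cat([reps, act_embs], dim=-1))        # (B, T, D_REP)
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1)).to(h.device)
        out = self.transformer(h, mask=mask)
        return out[:, -1]  # aggregated (mostly action-invariant) representation


encoder = Encoder()                       # online encoder (per-view, equivariant)
target_encoder = copy.deepcopy(encoder)   # EMA target; no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)
act_embed = nn.Embedding(N_ACTIONS, D_ACT)
aggregator = SeqAggregator()
predictor = nn.Sequential(nn.Linear(D_REP + D_ACT, 256), nn.ReLU(),
                          nn.Linear(256, D_REP))
params = (list(encoder.parameters()) + list(act_embed.parameters()) +
          list(aggregator.parameters()) + list(predictor.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)


def training_step(obs_seq, actions, next_action, next_obs):
    """obs_seq: (B, T, 3, 32, 32); actions: (B, T) ints; next_obs: (B, 3, 32, 32)."""
    B, T = actions.shape
    reps = encoder(obs_seq.flatten(0, 1)).view(B, T, -1)     # per-view representations
    global_rep = aggregator(reps, act_embed(actions))        # aggregate the sequence
    pred = predictor(torch.cat([global_rep, act_embed(next_action)], dim=-1))
    with torch.no_grad():
        target = target_encoder(next_obs)                    # latent prediction target
    loss = F.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                                    # EMA update of target encoder
        for p_t, p_o in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(0.99).add_(0.01 * p_o)
    return loss.item()


# Dummy usage with random data
loss = training_step(torch.randn(4, 5, 3, 32, 32),
                     torch.randint(0, N_ACTIONS, (4, 5)),
                     torch.randint(0, N_ACTIONS, (4,)),
                     torch.randn(4, 3, 32, 32))
print(f"loss: {loss:.4f}")

In the abstract's terms, global_rep plays the role of the aggregated, mostly action-invariant representation, while the per-view encoder outputs remain action-equivariant; the actual model, training objective and data pipeline in the paper may differ from this sketch.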