
Eilif Benjamin Muller

Associate Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Neurosciences
Principal Investigator, Architectures of Biological Learning Lab (ABL-Lab), CHU Sainte-Justine Research Center
Research Topics
Representation Learning
Online Learning
Deep Learning
Generative Models
Computational Neuroscience
Recurrent Neural Networks
Dynamical Systems
Computer Vision

Biography

Eilif B. Muller is a neuroscientist and artificial intelligence researcher. He uses computational and mathematical approaches to study the biological and algorithmic mechanisms of learning in the mammalian neocortex. He earned a bachelor's degree (2001) in mathematical physics from Simon Fraser University, followed by a master's degree (2003) and a doctorate in natural sciences (2007) in physics, with a specialization in computational neuroscience, from Ruprecht Karl University of Heidelberg, Germany's oldest university. He carried out his postdoctoral work (2007-2010) in the Laboratory of Computational Neuroscience at EPFL (Switzerland) with Professor Wulfram Gerstner, focusing on network dynamics, simulation technology and plasticity.

He subsequently led (2011-2019) the research team at EPFL's Blue Brain Project that pioneered in silico neuroscience: unprecedented data-driven simulation of brain tissue. In 2015, Eilif B. Muller and his colleagues published the landmark study "Reconstruction and Simulation of Neocortical Microcircuitry" in the journal Cell, describing "the most complete simulation of a piece of excitable brain matter to date," according to Christof Koch (President and Chief Scientist of the Allen Institute for Brain Science). This approach allowed him and his team to contribute significantly to our understanding of the structure, dynamics and plasticity of the neocortex, leading to publications in top journals such as Nature Neuroscience, Nature Communications and Cerebral Cortex.

In 2019, Eilif B. Muller moved to Montréal, drawn by its thriving neuro-AI research community. He first worked there as a principal researcher at Element AI, before his appointment at Université de Montréal and CHU Sainte-Justine, where he launched the Architectures of Biological Learning Lab (ABL-Lab).

Current Students

PhD - McGill
Co-supervisor:
PhD - UdeM
Co-supervisor:
PhD - UdeM
Principal supervisor:
Research Master's - UdeM

Publications

Assemblies, synapse clustering, and network topology interact with plasticity to explain structure-function relationships of the cortical connectome
András Ecker
Daniela Egas Santander
Marwan Abdellah
Jorge Blanco Alonso
Sirio Bolaños-Puchet
Giuseppe Chindemi
James B. Isbister
James King
Pramod Kumbhar
Ioannis Magkanaris
Michael W. Reimann
Synaptic plasticity underlies the brain’s ability to learn and adapt. While experiments in brain slices have revealed mechanisms and protocols for the induction of plasticity between pairs of neurons, how these synaptic changes are coordinated in biological neuronal networks to ensure the emergence of learning remains poorly understood. Simulation and modeling have emerged as important tools to study learning in plastic networks, but have yet to achieve a scale that incorporates realistic network structure, active dendrites, and multi-synapse interactions, key determinants of synaptic plasticity. To rise to this challenge, we endowed an existing large-scale cortical network model, incorporating data-constrained dendritic processing and multi-synaptic connections, with a calcium-based model of functional plasticity that captures the diversity of excitatory connections extrapolated to in vivo-like conditions. This allowed us to study how dendrites and network structure interact with plasticity to shape stimulus representations at the microcircuit level. In our exploratory simulations, plasticity acted sparsely and specifically, and firing rates and weight distributions remained stable without additional homeostatic mechanisms. At the circuit level, we found plasticity was driven by co-firing stimulus-evoked functional assemblies, spatial clustering of synapses on dendrites, and the topology of the network connectivity. As a result of the plastic changes, the network became more reliable, with more stimulus-specific responses. We confirmed our testable predictions in the MICrONS datasets, an openly available electron microscopic reconstruction of a large volume of cortical tissue. Our results quantify at a large scale how the dendritic architecture and higher-order structure of cortical microcircuits play a central role in functional plasticity and provide a foundation for elucidating their role in learning.
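As an illustration of the kind of rule the abstract refers to, the following Python sketch implements a generic calcium-threshold plasticity rule in the spirit of Graupner and Brunel (2012): a synapse's efficacy is potentiated or depressed whenever a pre- and postsynaptically driven calcium trace crosses the corresponding threshold. All parameter values, the example spike trains, and the omission of bistability and noise terms are assumptions of this sketch, not details of the model used in the paper.

```python
import numpy as np

# Minimal sketch of a calcium-threshold plasticity rule (in the spirit of
# Graupner & Brunel, 2012). Parameters and spike trains are illustrative,
# not those of the published cortical model.
def simulate_synapse(pre_spikes, post_spikes, dt=1e-4, t_max=1.0,
                     tau_ca=0.02, c_pre=1.0, c_post=2.0,
                     theta_d=1.0, theta_p=1.3,
                     gamma_d=200.0, gamma_p=320.0, tau_rho=150.0):
    """Evolve synaptic efficacy rho driven by a calcium trace c(t)."""
    n_steps = int(t_max / dt)
    c, rho = 0.0, 0.5                      # calcium trace, efficacy in [0, 1]
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    for step in range(n_steps):
        c += -c / tau_ca * dt              # calcium decays exponentially
        if step in pre:
            c += c_pre                     # presynaptic spike -> calcium influx
        if step in post:
            c += c_post                    # postsynaptic spike -> calcium influx
        # potentiation above theta_p, depression above theta_d
        drho = (gamma_p * (1.0 - rho) * (c > theta_p)
                - gamma_d * rho * (c > theta_d)) / tau_rho
        rho = np.clip(rho + drho * dt, 0.0, 1.0)
    return rho

# Example: a few causally ordered pre/post spike pairs (pre leads post by 10 ms)
pre_times = [0.1, 0.2, 0.3, 0.4]
post_times = [0.11, 0.21, 0.31, 0.41]
print("final efficacy:", simulate_synapse(pre_times, post_times))
```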
Learning to combine top-down context and feed-forward representations under ambiguity with apical and basal dendrites
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
Artificial neural networks (ANNs) are an important tool for studying neural computation, but many features of the brain are not captured by standard ANN architectures. One notable missing feature in most ANN models is top-down feedback, i.e. projections from higher-order layers to lower-order layers in the network. Top-down feedback is ubiquitous in the brain, and it has a unique modulatory impact on activity in neocortical pyramidal neurons. However, we still do not understand its computational role. Here we develop a deep neural network model that captures the core functional properties of top-down feedback in the neocortex, allowing us to construct hierarchical recurrent ANN models that more closely reflect the architecture of the brain. We use this to explore the impact of different hierarchical recurrent architectures on an audiovisual integration task. We find that certain hierarchies, namely those that mimic the architecture of the human brain, impart ANN models with a light visual bias similar to that seen in humans. This bias does not impair performance on the audiovisual tasks. The results further suggest that different configurations of top-down feedback make otherwise identically connected models functionally distinct from each other, and from traditional feedforward-only models. Altogether our findings demonstrate that modulatory top-down feedback is a computationally relevant feature of biological brains, and that incorporating it into ANNs can affect their behavior and help determine the solutions that the network can discover.
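To make the notion of modulatory (rather than driving) top-down feedback concrete, here is a hypothetical PyTorch sketch of a two-area network in which feedback from the higher area multiplicatively gates the lower area's feedforward activity over a few recurrent steps. The layer sizes, sigmoid gating, and two-step unrolling are illustrative assumptions and are not taken from the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of modulatory top-down feedback between two areas:
# feedback from the higher area gates the lower area's feedforward drive
# multiplicatively, rather than adding to it.
class TwoAreaTopDown(nn.Module):
    def __init__(self, in_dim=32, low_dim=64, high_dim=64, n_classes=10):
        super().__init__()
        self.ff_low = nn.Linear(in_dim, low_dim)      # bottom-up into lower area
        self.ff_high = nn.Linear(low_dim, high_dim)   # lower -> higher area
        self.fb = nn.Linear(high_dim, low_dim)        # higher -> lower feedback
        self.readout = nn.Linear(high_dim, n_classes)

    def forward(self, x, n_steps=2):
        gate = torch.ones(x.shape[0], self.ff_low.out_features, device=x.device)
        for _ in range(n_steps):
            low = torch.relu(self.ff_low(x)) * gate    # feedback gates, does not drive
            high = torch.relu(self.ff_high(low))
            gate = torch.sigmoid(self.fb(high))        # modulatory signal for next step
        return self.readout(high)

# Example forward pass on a random batch
model = TwoAreaTopDown()
logits = model(torch.randn(8, 32))
print(logits.shape)  # torch.Size([8, 10])
```

Replacing the multiplication by an addition would turn the feedback into a driving input, which is the distinction between modulatory and driving feedback that the abstract emphasizes.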
Modeling and Simulation of Neocortical Micro- and Mesocircuitry. Part II: Physiology and Experimentation
James B. Isbister
András Ecker
Christoph Pokorny
Sirio Bolaños-Puchet
Daniela Egas Santander
Alexis Arnaudon
Omar Awile
Natali Barros-Zulaica
Jorge Blanco Alonso
Elvis Boci
Giuseppe Chindemi
Jean-Denis Courcol
Tanguy Damart
Thomas Delemontex
Alexander Dietz
Gianluca Ficarelli
Michael Gevaert
Joni Herttuainen
Genrich Ivaska
Weina Ji
Daniel Keller
James King
Pramod Kumbhar
Samuel Lapere
Polina Litvak
Darshan Mandge
Fernando Pereira
Judit Planas
Rajnish Ranjan
Maria Reva
Armando Romani
Christian Rössert
Felix Schürmann
Vishal Sood
Aleksandra Teska
Anil Tuncel
Werner Van Geit
Matthias Wolf
Henry Markram
Srikanth Ramaswamy
Michael W. Reimann
Cortical dynamics underlie many cognitive processes and emerge from complex multi-scale interactions, which are challenging to study in vivo. Large-scale, biophysically detailed models offer a tool which can complement laboratory approaches. We present a model comprising eight somatosensory cortex subregions, 4.2 million morphologically and electrically detailed neurons, and 13.2 billion local and mid-range synapses. In silico tools enabled reproduction and extension of complex laboratory experiments under a single parameterization, providing strong validation. The model reproduced millisecond-precise stimulus-responses, stimulus-encoding under targeted optogenetic activation, and selective propagation of stimulus-evoked activity to downstream areas. The model’s direct correspondence with biology generated predictions about how multiscale organization shapes activity; for example, how cortical activity is shaped by high-dimensional connectivity motifs in local and mid-range connectivity, and spatial targeting rules by inhibitory subpopulations. The latter was facilitated using a rewired connectome which included specific targeting rules observed for different inhibitory neuron types in electron microscopy. The model also predicted the role of inhibitory interneuron types and different layers in stimulus encoding. Simulation tools and a large subvolume of the model are made available to enable further community-driven improvement, validation and investigation.
Specific inhibition and disinhibition in the higher-order structure of a cortical connectome
Michael W. Reimann
Daniela Egas Santander
András Ecker
Neuronal network activity is thought to be structured around the activation of assemblies, or low-dimensional manifolds describing states of activity. Both views describe neurons acting not independently, but in concert, likely facilitated by strong recurrent excitation between them. The role of inhibition in these frameworks – if considered at all – is often reduced to blanket inhibition with no specificity with respect to which excitatory neurons are targeted. We analyzed the structure of excitation and inhibition in the MICrONS 1mm3 dataset, an electron microscopic reconstruction of a piece of cortical tissue. We found that excitation was structured around a feed-forward flow in non-random motifs of seven or more neurons. This revealed a structure of information flow from a small number of sources to a larger number of potential targets that became only visible when larger motifs were considered instead of individual pairs. Inhibitory neurons targeted and were targeted by neurons in specific sequential positions of these motifs. Additionally, disynaptic inhibition was strongest between target motifs excited by the same group of source neurons, implying competition between them. The structure of this inhibition was also highly specific and symmetrical, contradicting the idea of non-specific blanket inhibition. None of these trends are detectable in only pairwise connectivity, demonstrating that inhibition is specifically structured by these large motifs. Further, we found that these motifs represent higher order connectivity patterns which are present, but to a lesser extent in a recently released, detailed computational model, and not at all in a distance-dependent control. These findings have important implications for how synaptic plasticity reorganizes neocortical connectivity to implement learning and for the specific role of inhibition in this process.
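One of the quantities discussed above, disynaptic inhibition between excitatory neurons, can be computed directly from binary connection matrices, as in the toy NumPy sketch below. The random matrices and density values stand in for a real reconstructed connectome and are purely illustrative; the paper's actual analysis pipeline is not reproduced here.

```python
import numpy as np

# Toy sketch: counting disynaptic inhibition (E -> I -> E paths) between
# pairs of excitatory neurons from binary connection matrices.
rng = np.random.default_rng(0)
n_exc, n_inh = 200, 40

exc_to_inh = rng.random((n_exc, n_inh)) < 0.10   # E -> I connections
inh_to_exc = rng.random((n_inh, n_exc)) < 0.20   # I -> E connections

# Entry (i, j) counts inhibitory neurons k with connections i -> k -> j,
# i.e. the strength of disynaptic inhibition from excitatory i onto j.
disynaptic = exc_to_inh.astype(int) @ inh_to_exc.astype(int)
np.fill_diagonal(disynaptic, 0)

# Simple symmetry check: how correlated is i -> j with j -> i inhibition?
iu = np.triu_indices(n_exc, k=1)
symmetry = np.corrcoef(disynaptic[iu], disynaptic.T[iu])[0, 1]
print(f"mean disynaptic paths per pair: {disynaptic[iu].mean():.2f}")
print(f"symmetry (corr of i->j vs j->i): {symmetry:.2f}")
```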
Community-based reconstruction and simulation of a full-scale model of the rat hippocampus CA1 region
Armando Romani
Alberto Antonietti
Davide Bella
Julian Budd
Elisabetta Giacalone
Kerem Kurban
Sára Sáray
Marwan Abdellah
Alexis Arnaudon
Elvis Boci
Cristina Colangelo
Jean-Denis Courcol
Thomas Delemontex
András Ecker
Joanne Falck
Cyrille Favreau
Michael Gevaert
Juan B. Hernando
Joni Herttuainen
Genrich Ivaska
Lida Kanari
Anna-Kristin Kaufmann
James King
Pramod Kumbhar
Sigrun Lange
Huanxiang Lu
Carmen Alina Lupascu
Rosanna Migliore
Fabien Petitjean
Judit Planas
Pranav Rai
Srikanth Ramaswamy
Michael W. Reimann
Juan Luis Riquelme
Nadir Román Guerrero
Ying Shi
Vishal Sood
Mohameth François Sy
Werner Van Geit
Liesbeth Vanherpe
Tamás F. Freund
Audrey Mercer
Felix Schürmann
Alex M. Thomson
Michele Migliore
Szabolcs Káli
Henry Markram
The CA1 region of the hippocampus is one of the most studied regions of the rodent brain, thought to play an important role in cognitive functions such as memory and spatial navigation. Despite a wealth of experimental data on its structure and function, it has been challenging to integrate information obtained from diverse experimental approaches. To address this challenge, we present a community-based, full-scale in silico model of the rat CA1 that integrates a broad range of experimental data, from synapse to network, including the reconstruction of its principal afferents, the Schaffer collaterals, and a model of the effects that acetylcholine has on the system. We tested and validated each model component and the final network model, and made input data, assumptions, and strategies explicit and transparent. The unique flexibility of the model allows scientists to potentially address a range of scientific questions. In this article, we describe the methods used to set up simulations to reproduce in vitro and in vivo experiments. Among several applications in the article, we focus on theta rhythm, a prominent hippocampal oscillation associated with various behavioral correlates, and use our computer model to reproduce experimental findings. Finally, we make data, code, and model available through the hippocampushub.eu portal, which also provides an extensive set of analyses of the model and a user-friendly interface to facilitate adoption and usage. This community-based model represents a valuable tool for integrating diverse experimental data and provides a foundation for further research into the complex workings of the hippocampal CA1 region.
GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
Justine Gehring
Terry Yue Zhuo
Massimo Caccia
Seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models
Joint-embedding predictive architecture (JEPA) is a self-supervised learning (SSL) paradigm with the capacity of world modeling via action-conditioned prediction. Previously, JEPA world models have been shown to learn action-invariant or action-equivariant representations by predicting one view of an image from another. Unlike JEPA and similar SSL paradigms, animals, including humans, learn to recognize new objects through a sequence of active interactions. To introduce sequential interactions, we propose seq-JEPA, a novel SSL world model equipped with an autoregressive memory module. Seq-JEPA aggregates a sequence of action-conditioned observations to produce a global representation of them. This global representation, conditioned on the next action, is used to predict the latent representation of the next observation. We empirically show the advantages of this sequence of action-conditioned observations and examine our sequential modeling paradigm in two settings: (1) predictive learning across saccades, a method inspired by the role of eye movements in embodied vision. This approach learns self-supervised image representations by processing a sequence of low-resolution visual patches sampled from image saliencies, without any hand-crafted data augmentations. (2) invariance-equivariance trade-off: seq-JEPA's architecture results in automatic separation of invariant and equivariant representations, with the aggregated autoregressor outputs being mostly action-invariant and the encoder output being equivariant. This is in contrast with many equivariant SSL methods that expect a single representational space to contain both invariant and equivariant features, potentially creating a trade-off between the two. Empirically, seq-JEPA achieves competitive performance on both invariance and equivariance-related benchmarks compared to existing methods. Importantly, both invariance and equivariance-related downstream performances increase as the number of available observations increases.
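The training step described in the abstract can be summarized in a short, hypothetical PyTorch sketch: encode a sequence of observations, aggregate them together with the actions that produced them, and predict the latent of the next observation conditioned on the next action. The GRU aggregator, MLP predictor, stop-gradient target, and all dimensions are assumptions of this sketch rather than details of seq-JEPA itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of an action-conditioned, sequence-aggregating JEPA step.
class SeqJEPASketch(nn.Module):
    def __init__(self, obs_dim=128, act_dim=4, latent_dim=64, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, latent_dim))
        self.aggregator = nn.GRU(latent_dim + act_dim, hidden_dim, batch_first=True)
        self.predictor = nn.Sequential(nn.Linear(hidden_dim + act_dim, hidden_dim),
                                       nn.ReLU(), nn.Linear(hidden_dim, latent_dim))

    def forward(self, obs_seq, act_seq, next_obs, next_act):
        z_seq = self.encoder(obs_seq)                       # (B, T, latent)
        _, h = self.aggregator(torch.cat([z_seq, act_seq], dim=-1))
        global_rep = h[-1]                                  # summary of the sequence
        pred = self.predictor(torch.cat([global_rep, next_act], dim=-1))
        with torch.no_grad():                               # target latent, no gradient
            target = self.encoder(next_obs)
        return F.mse_loss(pred, target)

# Example: batch of 8 sequences of 5 observation/action pairs
model = SeqJEPASketch()
loss = model(torch.randn(8, 5, 128), torch.randn(8, 5, 4),
             torch.randn(8, 128), torch.randn(8, 4))
print(loss.item())
```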
Long-term plasticity induces sparse and specific synaptic changes in a biophysically detailed cortical model
András Ecker
Daniela Egas Santander
Marwan Abdellah
Jorge Blanco Alonso
Sirio Bolaños-Puchet
Giuseppe Chindemi
James B. Isbister
James King
Pramod Kumbhar
Ioannis Magkanaris
Michael W. Reimann