
Maxime Daigle

Master's Research - McGill University
Research Topics
Brain-inspired AI
Computational Neuroscience
Deep Learning
Memory
NeuroAI
Neuroscience
Neurotechnology
Reinforcement Learning

Publications

Spatially and non-spatially tuned hippocampal neurons are linear perceptual and nonlinear memory encoders
Kaicheng Yan
Benjamin Corrigan
Roberto Gulli
Julio Martinez-Trujillo
The hippocampus has long been regarded as a neural map of physical space, with its neurons categorized as spatially or non-spatially tuned according to their response selectivity. However, growing evidence suggests that this dichotomy oversimplifies the complex roles hippocampal neurons play in integrating spatial and non-spatial information. Through computational modeling and in-vivo electrophysiology in macaques, we show that neurons classified as spatially tuned primarily encode linear combinations of immediate behaviorally relevant factors, while those labeled as non-spatially tuned rely on nonlinear mechanisms to integrate temporally distant experiences. Furthermore, our findings reveal a temporal gradient in the primate CA3 region, where spatial selectivity diminishes as neurons encode increasingly distant past events. Finally, using artificial neural networks, we demonstrate that nonlinear recurrent connections are crucial for capturing the response dynamics of non-spatially tuned neurons, particularly those encoding memory-related information. These findings challenge the traditional dichotomy of spatial versus non-spatial representations and instead suggest a continuum of linear and nonlinear computations that underpin hippocampal function. This framework provides new insights into how the hippocampus bridges perception and memory, informing on its role in episodic memory, spatial navigation, and associative learning.
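The core distinction the abstract draws, linear encoding of the immediate input versus nonlinear recurrent integration of past inputs, can be illustrated with a toy sketch. This is not the paper's model; the two "units" below are hypothetical stand-ins that show why a linear readout of the current input cannot distinguish histories, while a nonlinear recurrence can:

```python
import math

# Illustrative sketch (not the paper's model): two toy "neurons" driven by a
# sequence of behaviorally relevant inputs. The "perceptual" unit responds to
# a linear combination of the current input only; the "memory" unit integrates
# past inputs through a nonlinear (tanh) recurrence, so its response depends
# on temporally distant experience.

def linear_unit(current_inputs, weights):
    """Response depends only on the current input vector."""
    return sum(w * x for w, x in zip(weights, current_inputs))

def recurrent_unit(input_sequence, w_in, w_rec):
    """Response depends on the whole input history via a nonlinear recurrence."""
    h = 0.0
    for x in input_sequence:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# Two histories that end in the same current input:
hist_a = [0.0, 0.0, 1.0]
hist_b = [1.0, 1.0, 1.0]

lin_a = linear_unit([hist_a[-1]], [0.8])
lin_b = linear_unit([hist_b[-1]], [0.8])
mem_a = recurrent_unit(hist_a, w_in=1.0, w_rec=0.9)
mem_b = recurrent_unit(hist_b, w_in=1.0, w_rec=0.9)

print(lin_a == lin_b)  # True: linear unit cannot tell the histories apart
print(mem_a == mem_b)  # False: recurrent unit carries information about the past
```

The same current input yields identical linear responses but distinct recurrent responses, the property the paper associates with non-spatially tuned, memory-encoding neurons.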
Building spatial world models from sparse transitional episodic memories
Many animals possess a remarkable capacity to rapidly construct flexible mental models of their environments. These world models are crucial for ethologically relevant behaviors such as navigation, exploration, and planning. The ability to form episodic memories and make inferences based on these sparse experiences is believed to underpin the efficiency and adaptability of these models in the brain. Here, we ask: Can a neural network learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories? We formulate the problem in a simulated world and propose a novel framework, the Episodic Spatial World Model (ESWM), as a potential answer. We show that ESWM is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment. It is also inherently adaptive, allowing for rapid updates when the environment changes. In addition, we demonstrate that ESWM readily enables near-optimal strategies for exploring novel environments and navigating between arbitrary points, all without the need for additional training.