
Guillaume Dumas

Associate Academic Member
Associate Professor, Université de Montréal, Department of Psychiatry and Addictology
Adjunct Professor, McGill University, Department of Psychiatry
Research Topics
Computational Biology
Computational Neuroscience
Deep Learning
Dynamical Systems
Machine Learning Theory
Medical Machine Learning
Reinforcement Learning

Biography

Guillaume Dumas is an associate professor of computational psychiatry in the Faculty of Medicine, Université de Montréal, and principal investigator of the Precision Psychiatry and Social Physiology laboratory at the Centre hospitalier universitaire (CHU) Sainte-Justine Research Centre. He holds the IVADO professorship for AI in Mental Health and a Junior 1 (J1) research scholar award in AI and Digital Health from the Fonds de recherche du Québec – Santé (FRQS). In 2023, Dumas was named a CIFAR Azrieli Global Scholar in the Brain, Mind & Consciousness program and a Future Leader in Canadian Brain Research by the Brain Canada Foundation.

Dumas was previously a permanent researcher in neuroscience and computational biology at the Institut Pasteur (Paris). Before that, he was a postdoctoral fellow at the Center for Complex Systems and Brain Sciences (Florida Atlantic University). He holds an engineering degree in advanced engineering and computer science (École Centrale Paris), two MSc degrees (theoretical physics, Paris-Saclay University; cognitive science, ENS/EHESS/Paris 5), and a PhD in cognitive neuroscience (Sorbonne University).

The goal of his research is to cross-fertilize AI/ML, cognitive neuroscience, and digital medicine through an interdisciplinary program with two main axes:

- AI/ML for Mental Health, which aims to create new algorithms to investigate the development of human cognitive architecture and deliver personalized medicine in neuropsychiatry using data from genomes to smartphones.

- Social Neuroscience for AI/ML, which translates basic brain research and dynamical systems formalism into neurocomputational and machine learning hybrid models (NeuroML) and machines with social learning abilities (Social NeuroAI & HMI).

Current Students

Independent visiting researcher - Université de Montréal (principal supervisor)
Master's Research - Université de Montréal
Master's Research - Université de Montréal (principal supervisor)
Postdoctorate - Université de Montréal (co-supervisor)
PhD - Université de Montréal (principal supervisor)

Publications

Scalable Approaches for a Theory of Many Minds
Maximilian Puelma Touzel
Amin Memarian
Matthew D Riemer
Andrei Mircea
Andrew Robert Williams
Elin Ahlstrand
Lucas Lehnert
Rupali Bhati
A major challenge as we move towards building agents for real-world problems, which could involve a massive number of human and/or machine agents, is that we must learn to reason about the behavior of these many other agents. In this paper, we consider the problem of scaling a predictive Theory of Mind (ToM) model to a very large number of interacting agents with a fixed computational budget. Motivated by the limited diversity of agent types, existing approaches to scalable ToM learn versatile single-agent representations for quickly adapting to new agents encountered sequentially. We consider the more general setting in which many agents are observed in parallel and formulate the corresponding Theory of Many Minds (ToMM) problem of estimating the joint policy. We frame the scaling behavior of solutions in terms of parameter sharing schemes and in particular propose two parameter-free architectural features that endow models with the ability to exploit action correlations: encoding a multi-agent context, and decoding through an abstracted joint action space. The increased predictive capabilities that have come with foundation models have made it easier to imagine the possibility of using these models to make simulations that imitate the behavior of many agents within complex real-world systems. Being able to perform these simulations in a general-purpose way would not only help make more capable agents, it would also be a very useful capability for applications in social science, political science, and economics.
Lost in Translation: The Algorithmic Gap Between LMs and the Brain
Tosato Tommaso
Tikeng Notsawo Pascal Junior
Helbling Saskia
Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear. This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis, emphasizing the importance of looking beyond input-output behavior to examine and compare the internal processes of these systems. We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models. Furthermore, we explore the role of scaling laws in bridging the gap between LMs and human cognition, highlighting the need for efficiency constraints analogous to those in biological systems. By developing LMs that more closely mimic brain function, we aim to advance both artificial intelligence and our understanding of human cognition.
295. Rare Variant Genetic Architecture of the Human Cortical MRI Phenotypes in General Population
Kuldeep Kumar
Sayeh Kazem
Zhijie Liao
Jakub Kopal
Guillaume Huguet
Thomas Renne
Martineau Jean-Louis
Zhe Xie
Zohra Saci
Laura Almasy
David C. Glahn
Tomas Paus
Carrie Bearden
Paul Thompson
Richard A.I. Bethlehem
Varun Warrier
Sébastien Jacquemont
Effects of gene dosage on cognitive ability: A function-based association study across brain and non-brain processes
Guillaume Huguet
Thomas Renne
Cécile Poulain
Alma Dubuc
Kuldeep Kumar
Sayeh Kazem
Worrawat Engchuan
Omar Shanta
Elise Douard
Catherine Proulx
Martineau Jean-Louis
Zohra Saci
Josephine Mollon
Laura Schultz
Emma E M Knowles
Simon R. Cox
David Porteous
Gail Davies
Paul Redmond
Sarah E. Harris … (10 more authors)
Gunter Schumann
Aurélie Labbe
Zdenka Pausova
Tomas Paus
Stephen W Scherer
Jonathan Sebat
Laura Almasy
David C. Glahn
Sébastien Jacquemont
Predicting Grokking Long Before it Happens: A look into the loss landscape of models which grok
Tikeng Notsawo Pascal Junior
Pascal Notsawo
Hattie Zhou
Mohammad Pezeshki
Sources of richness and ineffability for phenomenally conscious states
Xu Ji
Eric Elmoznino
George Deane
Axel Constant
Jonathan Simon
The "jingle-jangle fallacy" of empathy: Delineating affective, cognitive and motor components of empathy from behavioral synchrony using a virtual agent
Julia Ayache
Alexander Sumich
D. Kuss
Darren Rhodes
Nadja Heym
Effective Latent Differential Equation Models via Attention and Multiple Shooting
Germán Abrevaya
Mahta Ramezanian-Panahi
Jean-Christophe Gagnon-Audet
Pablo Polosecki
Silvina Ponce Dawson
Guillermo Cecchi
From physics to sentience: Deciphering the semantics of the free-energy principle and evaluating its claims: Comment on "Path integrals, particular kinds, and strange things" by Karl Friston et al.
Zahra Sheikhbahaee
Adam Safron
Casper Hesp
Exploring the multidimensional nature of repetitive and restricted behaviors and interests (RRBI) in autism: neuroanatomical correlates and clinical implications
Aline Lefebvre
Nicolas Traut
Amandine Pedoux
Anna Maruani
Anita Beggiato
Monique Elmaleh
David Germanaud
Anouck Amestoy
Myriam Ly‐Le Moal
Christopher H. Chatham
Lorraine Murtagh
Manuel Bouvard
Marianne Alisson
Marion Leboyer
Thomas Bourgeron
Roberto Toro
Clara A. Moreau
Richard Delorme
Interoceptive technologies for psychiatric interventions: From diagnosis to clinical applications
Felix Schoeller
Adam Haar Horowitz
Abhinandan Jain
Pattie Maes
Nicco Reggente
Leonardo Christov-Moore
Giovanni Pezzulo
Laura Barca
Micah Allen
Roy Salomon
Mark Miller
Daniele Di Lernia
Giuseppe Riva
Manos Tsakiris
Moussa A. Chalah
Arno Klein
Ben Zhang
Teresa Garcia
Ursula Pollack
Marion Trousselard … (4 more authors)
Charles Verdonk
Vladimir Adrien
Karl J. Friston
Attention Schema in Neural Agents
Dianbo Liu
Samuele Bolotta
Mike He Zhu
Zahra Sheikhbahaee
Attention has become a common ingredient in deep learning architectures. It adds a dynamical selection of information on top of the static selection of information supported by weights. In the same way, we can imagine a higher-order informational filter built on top of attention: an Attention Schema (AS), namely, a descriptive and predictive model of attention. In cognitive neuroscience, Attention Schema Theory (AST) supports this idea of distinguishing attention from AS. A strong prediction of this theory is that an agent can use its own AS to also infer the states of other agents' attention and consequently enhance coordination with other agents. As such, multi-agent reinforcement learning would be an ideal setting to experimentally test the validity of AST. We explore different ways in which attention and AS interact with each other. Our preliminary results indicate that agents that implement the AS as a recurrent internal control achieve the best performance. In general, these exploratory experiments suggest that equipping artificial agents with a model of attention can enhance their social intelligence.