
Taylor Webb

Associate Academic Member
Assistant Professor, Department of Psychology
Research Topics
Deep Learning
Consciousness
Reasoning
Cognitive Science
Computer Vision

Current Students

Research Collaborator - University of Amsterdam
Alumni Research Collaborator
Master's (Research) - UdeM
Research Collaborator - University of Amsterdam

Publications

A brain-inspired agentic architecture to improve planning with LLMs
Shanka Subhra Mondal
Ida Momennejad
Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via component processes that are predominantly associated with specific brain regions. These processes include conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are often capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a modular agentic architecture - the Modular Agentic Planner (MAP) - in which planning is performed via the interaction of specialized brain-inspired LLM modules. We evaluate MAP on three challenging planning tasks – graph traversal, Tower of Hanoi, and the PlanBench benchmark – as well as an NLP task requiring multi-step reasoning (strategyQA). We find that MAP yields significant improvements over both standard LLM methods and competitive agentic baselines, can be effectively combined with smaller and more cost-efficient LLMs, and displays superior transfer across tasks. These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs.
Multi-step planning is a challenge for LLMs. Here, the authors introduce a brain-inspired Modular Agentic Planner that decomposes planning into specialized LLM modules, improving performance across tasks and highlighting the value of cognitive neuroscience for LLM design.
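As a rough illustration only (not the authors' implementation), the kind of modular coordination described in the abstract can be sketched with hypothetical Python functions standing in for the specialized LLM modules:

```python
# Illustrative sketch of a modular agentic planning loop.
# Each function below is a hypothetical stand-in for a specialized
# LLM prompt; the real MAP system queries language models instead.

def decompose(goal):
    # Task decomposition: split a goal into an ordered list of subgoals.
    return [("move", i) for i in range(3)]

def predict(state, action):
    # State prediction: return the anticipated state after an action.
    return state + [action]

def monitor(state, action):
    # Conflict monitoring: veto actions that conflict with the state.
    return action not in state

def evaluate(state, goal):
    # State evaluation: score progress toward the goal (toy heuristic).
    return len(state)

def plan(goal):
    # Task coordination: chain the modules, applying each vetted step.
    state = []
    for subgoal in decompose(goal):
        if monitor(state, subgoal):
            state = predict(state, subgoal)
    return state

print(plan("stack disks"))
```

The point of the sketch is the division of labor: each cognitive function named in the abstract maps to one callable, and a coordinator chains them rather than asking a single model to plan end to end.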
Visual serial processing deficits explain divergences in human and VLM reasoning
Nicholas Budny
Kia Ghods
Declan Campbell
Raja Marjieh
Amogh Joshi
Sreejan Kumar
Jonathan D. Cohen
Thomas L. Griffiths
Why do Vision Language Models (VLMs), despite success on standard benchmarks, often fail to match human performance on surprisingly simple visual reasoning tasks? While the underlying computational principles are still debated, we hypothesize that a crucial factor is a deficit in visually-grounded serial processing. To test this hypothesis, we compared human and VLM performance across tasks designed to vary serial processing demands in three distinct domains: geometric reasoning, perceptual enumeration, and mental rotation. Tasks within each domain varied serial processing load by manipulating factors such as geometric concept complexity, perceptual individuation load, and transformation difficulty. Across all domains, our results revealed a consistent pattern: decreased VLM accuracy was strongly correlated with increased human reaction time (used as a proxy for serial processing load). As tasks require more demanding serial processing -- whether composing concepts, enumerating items, or performing mental transformations -- the VLM-human performance gap widens reliably. These findings support our hypothesis, indicating that limitations in serial, visually grounded reasoning represent a fundamental bottleneck that distinguishes current VLMs from humans.
Whither symbols in the era of advanced neural networks?
Thomas L. Griffiths
Brenden M. Lake
R. Thomas McCoy
Ellie Pavlick
Some of the strongest evidence that human minds should be thought about in terms of symbolic systems has been the way they combine ideas, produce novelty, and learn quickly. We argue that modern neural networks -- and the artificial intelligence systems built upon them -- exhibit similar abilities. This undermines the argument that the cognitive processes and representations used by human minds are symbolic, although the fact that these neural networks are typically trained on data generated by symbolic systems illustrates that such systems play an important role in characterizing the abstract problems that human minds have to solve. This argument leads us to offer a new agenda for research on the symbolic basis of human thought.
Visual symbolic mechanisms: Emergent symbol processing in vision language models
Declan Campbell
To accurately process a visual scene, observers must bind features together to represent individual objects. This capacity is necessary, for instance, to distinguish an image containing a red square and a blue circle from an image containing a blue square and a red circle. Recent work has found that language models solve this 'binding problem' via a set of symbol-like, content-independent indices, but it is unclear whether similar mechanisms are employed by vision language models (VLMs). This question is especially relevant, given the persistent failures of VLMs on tasks that require binding. Here, we identify a set of emergent symbolic mechanisms that support binding in VLMs via a content-independent, spatial indexing scheme. Moreover, we find that binding errors can be traced directly to failures in these mechanisms. Taken together, these results shed light on the mechanisms that support symbol-like processing in VLMs, and suggest possible avenues for addressing the persistent binding failures exhibited by these models.
What makes a theory of consciousness unscientific?
Derek H. Arnold
Mark G. Baxter
Tristan A. Bekinschtein
Yoshua Bengio
James W. Bisley
Jacob Browning
Dean Buonomano
David Carmel
Marisa Carrasco
Peter Carruthers
Olivia Carter
Dorita H. F. Chang
Mouslim Cherkaoui
Axel Cleeremans
Michael A. Cohen
Philip R. Corlett
Kalina Christoff
Sam Cumming
Cody A. Cushing
Beatrice de Gelder
Felipe De Brigard
Daniel C. Dennett
Nadine Dijkstra
Adrien Doerig
Paul E. Dux
Stephen M. Fleming
Keith Frankish
Chris D. Frith
Sarah Garfinkel
Melvyn A. Goodale
Jacqueline Gottlieb
Jake R. Hanson
Ran R. Hassin
Michael H. Herzog
Cecilia Heyes
Po-Jang Hsieh
Shao-Min Hung
Robert Kentridge
Tomas Knapen
Nikos Konstantinou
Konrad Kording
Timo L. Kvamme
Sze Chai Kwok
Renzo C. Lanfranco
Hakwan Lau
Joseph LeDoux
Alan L. F. Lee
Camilo Libedinsky
Matthew D. Lieberman
Ying-Tung Lin
Ka-Yuet Liu
Maro G. Machizawa
Julio Martinez-Trujillo
Janet Metcalfe
Matthias Michel
Kenneth D. Miller
Partha P. Mitra
Dean Mobbs
Robert M. Mok
Jorge Morales
Myrto Mylopoulos
Brian Odegaard
Charles C.-F. Or
Adrian M. Owen
David Pereplyotchik
Franco Pestilli
Megan A. K. Peters
Ian Phillips
Rosanne L. Rademaker
Dobromir Rahnev
Geraint Rees
Dario L. Ringach
Adina Roskies
Daniela Schiller
Aaron Schurger
D. Samuel Schwarzkopf
Ryan B. Scott
Aaron R. Seitz
Joshua Shepherd
Juha Silvanto
Heleen A. Slagter
Barry C. Smith
Guillermo Solovey
David Soto
Hugo Spiers
Timo Stein
Frank Tong
Peter U. Tse
Jonas Vibell
Sebastian Watzl
Josh Weisberg
Thalia Wheatley
Martijn E. Wokke
Michał Klincewicz
Tony Cheng
Michael Schmitz
Miguel Ángel Sebastián
Joel S. Snyder