
Razvan Pascanu

Affiliate Member
Senior Research Scientist, Google DeepMind
Research Topics
Continual Learning
Deep Learning
Deep Neural Networks
Few-Shot Learning
Generalization
Geometric Deep Learning
Graph Neural Networks
Lifelong Learning
Machine Learning Theory
Mechanistic Interpretability
Neural Networks
Optimization
Recurrent Neural Networks
Reinforcement Learning
Representation Learning

Publications

Plasticity as the Mirror of Empowerment
David Abel
Michael Bowling
Andre Barreto
Will Dabney
Shi Dong
Steven Hansen
Anna Harutyunyan
Clare Lyle
Georgios Piliouras
Jonathan Richens
Mark Rowland
Tom Schaul
Satinder Singh
On the generalization of language models from in-context learning and finetuning: a controlled study
Andrew Lampinen
Arslan Chaudhry
Stephanie C.Y. Chan
Cody Wild
Diane Wan
Alex Ku
Jorg Bornschein
Murray Shanahan
James L. McClelland
LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities
Thomas Schmied
Jorg Bornschein
Jordi Grau-Moya
Markus Wulfmeier
Why do LLMs attend to the first token?
Federico Barbero
Álvaro Arroyo
Xiangming Gu
Christos Perivolaropoulos
Michael M. Bronstein
Petar Veličković
NoProp: Training Neural Networks without Back-propagation or Forward-propagation
Qinyu Li
Yee Whye Teh
How do language models learn facts? Dynamics, curricula and hallucinations
Nicolas Zucchet
Jorg Bornschein
Stephanie Chan
Andrew Lampinen
Soham De
From Markov to Laplace: How Mamba In-Context Learns Markov Chains
Marco Bondaschi
Nived Rajaraman
Xiuying Wei
Kannan Ramchandran
Caglar Gulcehre
Michael C. Gastpar
Ashok Vardhan Makkuva