Publications

One hundred years of EEG for brain and behaviour research
Faisal Mushtaq
Dominik Welke
Anne Gallagher
Yuri G. Pavlov
Layla Kouara
Jorge Bosch-Bayard
Jasper JF van den Bosch
Mahnaz Arvaneh
Amy R. Bland
Maximilien Chaumon
Cornelius Borck
Xun He
Steven J. Luck
Maro G. Machizawa
Cyril Pernet
Aina Puce
Sidney J. Segalowitz
Christine Rogers
Muhammad Awais
Claudio Babiloni
Neil W. Bailey
Sylvain Baillet
Robert C. A. Bendall
Daniel Brady
Maria L. Bringas-Vega
Niko A. Busch
Ana Calzada-Reyes
Armand Chatard
Peter E. Clayson
Michael X. Cohen
Jonathan Cole
Martin Constant
Alexandra Corneyllie
Damien Coyle
Damian Cruse
Ioannis Delis
Arnaud Delorme
Damien Fair
Tiago H. Falk
Matthias Gamer
Giorgio Ganis
Kilian Gloy
Samantha Gregory
Cameron D. Hassall
Katherine E. Hiley
Richard B. Ivry
Michael Jenkins
Jakob Kaiser
Andreas Keil
Robert T. Knight
Silvia Kochen
Boris Kotchoubey
Olave E. Krigolson
Nicolas Langer
Heinrich R. Liesefeld
Sarah Lippé
Raquel E. London
Annmarie MacNamara
Scott Makeig
Welber Marinovic
Eduardo Martínez-Montes
Aleya A. Marzuki
Ryan K. Mathew
Christoph Michel
José d. R. Millán
Mark Mon-Williams
Lilia Morales-Chacón
Richard Naar
Gustav Nilsonne
Guiomar Niso
Erika Nyhus
Robert Oostenveld
Katharina Paul
Walter Paulus
Daniela M. Pfabigan
Gilles Pourtois
Stefan Rampp
Manuel Rausch
Kay Robbins
Paolo M. Rossini
Manuela Ruzzoli
Barbara Schmidt
Magdalena Senderecka
Narayanan Srinivasan
Yannik Stegmann
Paul M. Thompson
Mitchell Valdes-Sosa
Melle J. W. van der Molen
Domenica Veniero
Edelyn Verona
Bradley Voytek
Dezhong Yao
Alan C. Evans
Pedro Valdes-Sosa
Switching between tasks can cause AI to lose the ability to learn
Clare Lyle
Zero-Shot Object-Centric Representation Learning
Aniket Rajiv Didolkar
Andrii Zadaianchuk
Michael Curtis Mozer
Georg Martius
Maximilian Seitzer
The goal of object-centric representation learning is to decompose visual scenes into a structured representation that isolates the entities. Recent successes have shown that object-centric representation learning can be scaled to real-world scenes by utilizing pre-trained self-supervised features. However, so far, object-centric methods have mostly been applied in-distribution, with models trained and evaluated on the same dataset. This is in contrast to the wider trend in machine learning towards general-purpose models directly applicable to unseen data and tasks. Thus, in this work, we study current object-centric methods through the lens of zero-shot generalization by introducing a benchmark comprising eight different synthetic and real-world datasets. We analyze the factors influencing zero-shot performance and find that training on diverse real-world images improves transferability to unseen scenarios. Furthermore, inspired by the success of task-specific fine-tuning in foundation models, we introduce a novel fine-tuning strategy to adapt pre-trained vision encoders for the task of object discovery. We find that the proposed approach results in state-of-the-art performance for unsupervised object discovery, exhibiting strong zero-shot transfer to unseen datasets.
Understanding the Local Geometry of Generative Model Manifolds
Ahmed Imtiaz Humayun
Candice Schumann
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the learned manifold representation is by computing global metrics like Fréchet Inception Distance using a large number of generated and real samples. However, generative model performance is not uniform across the learned manifold; e.g., for foundation models like Stable Diffusion, generation performance can vary significantly based on the conditioning or the initial noise vector being denoised. In this paper we study the relationship between the local geometry of the learned manifold and downstream generation. Based on the theory of continuous piecewise-linear (CPWL) generators, we use three geometric descriptors - scaling (