
Zahra Sheikhbahaee

Alumni

Publications

Proceedings of 1st Workshop on Advancing Artificial Intelligence through Theory of Mind
Mouad Abrini
Omri Abend
Dina M. Acklin
Henny Admoni
Gregor Aichinger
Nitay Alon
Zahra Ashktorab
Ashish Atreja
Moises Auron
Alexander Aufreiter
Raghav Awasthi
Soumya Banerjee
Joseph Barnby
Rhea Basappa
Severin Bergsmann
Djallel Bouneffouf
Patrick Callaghan
Marc Cavazza
Thierry Chaminade
Sonia Chernova
Mohamed Chetouan
Moumita Choudhury
Axel Cleeremans
J. Cywinski
Fabio Cuzzolin
Hokin Deng
N'yoma Diamond
C. D. Pasquasio
Max J. van Duijn
Mahapatra Dwarikanath
Qingying Gao
Ashok Goel
Rebecca R. Goldstein
Matthew C. Gombolay
Gabriel Enrique Gonzalez
Amar Halilovic
Tobias Halmdienst
Mahimul Islam
Julian Jara-Ettinger
Natalie Kastel
Renana Keydar
Ashish K. Khanna
Mahdi Khoramshahi
Jihyun Kim
Mihyeon Kim
Youngbin Kim
Senka Krivic
Nikita Krasnytskyi
Arun Kumar
Junehyoung Kwon
EunJu Lee
Shane Lee
Peter R. Lewis
Xue Li
Yijiang Li
Michal Lewandowski
Nathan Lloyd
Matthew B. Luebbers
Dezhi Luo
Haiyun Lyu
Dwarikanath Mahapatra
Kamal Maheshwari
Mallika Mainali
P. Mathur
Patrick Mederitsch
Shuwa Miura
Manuel Preston de Miranda
Reuth Mirsky
Shreya Mishra
Nina M. Moorman
Katelyn Morrison
John Muchovej
Bernhard Nessler
Felix Nessler
Hieu Minh Jord Nguyen
Abby Ortego
F. Papay
Antoine Pasquali
Hamed Rahimi
C. Raghu
Amanda L. Royka
Stefan Sarkadi
Jaelle Scheuerman
Simon Schmid
Paul Schrater
Anik Sen
Ke Shi
Reid G. Simmons
Nishant Singh
Mason O. Smith
Ramira van der Meulen
Anthia Solaki
Haoran Sun
Viktor Szolga
Matthew E. Taylor
Travis Taylor
Sanne van Waveren
R. Verbrugge
Eitan Wagner
Justin D. Weisz
Ximing Wen
William Yeoh
Wenlong Zhang
Michelle Zhao
Shlomo Zilberstein
Learning Robust Representations for Transfer in Reinforcement Learning
Roger Creus Castanyer
Hongyao Tang
Learning transferable representations for deep reinforcement learning (RL) is a challenging problem due to the inherent non-stationarity, distribution shift, and unstable training dynamics. To be useful, a transferable representation needs to be robust to such factors. In this work, we introduce a new architecture and training strategy for learning robust representations for transfer learning in RL. We propose leveraging multiple CNN encoders and training them not to specialize in areas of the state space but instead to match each other's representations. We find that the learned representations transfer well across many Atari tasks, resulting in better transfer learning performance and data efficiency than training from scratch.
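The abstract describes the core objective concretely enough to illustrate in code. Below is a minimal, hypothetical PyTorch sketch of a multi-encoder representation-matching loss; the encoder architecture, the pairwise MSE objective, and names such as SmallEncoder and alignment_loss are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: several CNN encoders are trained so that their
# representations of the same observation agree, discouraging any single
# encoder from specializing to one region of the state space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """A small Atari-style CNN encoder (the architecture is an assumption)."""
    def __init__(self, in_channels=4, embed_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(embed_dim)  # infers the flattened conv size

    def forward(self, obs):
        return self.fc(self.conv(obs))

def alignment_loss(encoders, obs):
    """Penalize pairwise disagreement between encoders on the same batch."""
    feats = [F.normalize(enc(obs), dim=-1) for enc in encoders]
    loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            loss = loss + F.mse_loss(feats[i], feats[j])
    return loss

encoders = nn.ModuleList([SmallEncoder() for _ in range(3)])
obs = torch.randn(8, 4, 84, 84)  # a batch of stacked 84x84 Atari frames
print(alignment_loss(encoders, obs))
```

In a full training loop, a term like this would be added with some weight to the usual RL loss, so that the encoders remain interchangeable rather than specialized.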
From physics to sentience: Deciphering the semantics of the free-energy principle and evaluating its claims: Comment on "Path integrals, particular kinds, and strange things" by Karl Friston et al.
Adam Safron
Casper Hesp
Attention Schema in Neural Agents
Dianbo Liu
Samuele Bolotta
Mike He Zhu
Attention has become a common ingredient in deep learning architectures. It adds a dynamical selection of information on top of the static selection of information supported by weights. In the same way, we can imagine a higher-order informational filter built on top of attention: an Attention Schema (AS), namely, a descriptive and predictive model of attention. In cognitive neuroscience, Attention Schema Theory (AST) supports this idea of distinguishing attention from the AS. A strong prediction of this theory is that an agent can use its own AS to also infer the states of other agents' attention and consequently enhance coordination with them. As such, multi-agent reinforcement learning is an ideal setting in which to experimentally test the validity of AST. We explore different ways in which attention and the AS interact with each other. Our preliminary results indicate that agents that implement the AS as a recurrent internal control achieve the best performance. Overall, these exploratory experiments suggest that equipping artificial agents with a model of attention can enhance their social intelligence.
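To make the distinction between attention and an Attention Schema concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation): attention weights select information from input slots, while a small recurrent module, standing in for the AS, tracks those weights and feeds its prediction back as an internal control signal. The module sizes and the way the AS state biases attention are illustrative assumptions.

```python
# Hypothetical sketch of an agent with an Attention Schema (AS): a recurrent
# module that models the agent's own attention and modulates it in return.
import torch
import torch.nn as nn

class AgentWithAttentionSchema(nn.Module):
    def __init__(self, obs_dim=32, n_slots=4, as_dim=16, n_actions=6):
        super().__init__()
        self.query = nn.Linear(obs_dim, obs_dim)      # first-order attention
        self.as_rnn = nn.GRUCell(n_slots, as_dim)     # AS: recurrent model of attention
        self.as_to_bias = nn.Linear(as_dim, n_slots)  # AS prediction as control signal
        self.policy = nn.Linear(obs_dim, n_actions)

    def forward(self, slots, as_state):
        # slots: (batch, n_slots, obs_dim) -- per-entity input features
        q = self.query(slots.mean(dim=1))                  # (batch, obs_dim)
        scores = torch.einsum("bd,bnd->bn", q, slots)      # raw attention scores
        scores = scores + self.as_to_bias(as_state)        # AS biases attention
        attn = scores.softmax(dim=-1)                      # (batch, n_slots)
        context = torch.einsum("bn,bnd->bd", attn, slots)  # attended summary
        as_state = self.as_rnn(attn.detach(), as_state)    # AS tracks its own attention
        return self.policy(context), attn, as_state

agent = AgentWithAttentionSchema()
slots = torch.randn(2, 4, 32)
as_state = torch.zeros(2, 16)
logits, attn, as_state = agent(slots, as_state)
```

The recurrent loop is the point of the sketch: the AS observes the attention distribution at each step and its state feeds back into the next step's attention, which is one plausible reading of the "recurrent internal control" variant the abstract reports as performing best.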