
Baihan Lin

Alumni

Publications

A Python Toolbox for Representational Similarity Analysis
Jasper JF van den Bosch
Tal Golan
Benjamin Peters
JohnMark Taylor
Mahdiyar Shahbazi
Jörn Diedrichsen
Nikolaus Kriegeskorte
Marieke Mur
Heiko H. Schütt
A Survey on Compositional Generalization in Applications
Djallel Bouneffouf
An Empirical Study of Human Behavioral Agents in Bandits, Contextual Bandits and Reinforcement Learning
Guillermo Cecchi
Djallel Bouneffouf
Jenna Reinen
Artificial behavioral agents are often evaluated on the consistency of their behavior and their performance in taking sequential actions in an environment to maximize some notion of cumulative reward. However, human decision making in real life usually involves different strategies and behavioral trajectories that lead to the same empirical outcome. Motivated by the clinical literature on a wide range of neurological and psychiatric disorders, we propose here a more general and flexible parametric framework for sequential decision making that involves a two-stream reward processing mechanism. We demonstrated that this framework is flexible and unified enough to incorporate a family of problems spanning multi-armed bandits (MAB), contextual bandits (CB) and reinforcement learning (RL), which decompose the sequential decision-making process at different levels. Inspired by the known reward processing abnormalities of many mental disorders, our clinically inspired agents demonstrated interesting behavioral trajectories and comparable performance on simulated tasks with particular reward distributions, a real-world dataset capturing human decision-making in gambling tasks, and the Pac-Man game across different reward stationarities in a lifelong learning setting.
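The two-stream reward processing mechanism described in the abstract can be illustrated with a minimal sketch: a multi-armed bandit agent that keeps separate value estimates for positive and negative rewards and combines them with adjustable weights when selecting an arm. The class name, parameter names, and default values below are illustrative assumptions, not the authors' released implementation.

```python
import random

class TwoStreamBanditAgent:
    """Minimal sketch of a two-stream reward-processing bandit agent.

    Positive and negative rewards update separate value estimates; the
    weight placed on each stream is a free parameter that can be tuned
    to mimic different reward-processing biases. Names and defaults are
    illustrative, not taken from the paper.
    """

    def __init__(self, n_arms, lr=0.1, pos_weight=1.0, neg_weight=1.0, epsilon=0.1):
        self.q_pos = [0.0] * n_arms   # estimates built from positive rewards
        self.q_neg = [0.0] * n_arms   # estimates built from negative rewards
        self.lr = lr
        self.pos_weight = pos_weight
        self.neg_weight = neg_weight
        self.epsilon = epsilon

    def value(self, arm):
        # Combined action value: weighted sum of the two streams.
        return self.pos_weight * self.q_pos[arm] + self.neg_weight * self.q_neg[arm]

    def select_arm(self):
        # Epsilon-greedy choice over the combined values.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q_pos))
        values = [self.value(a) for a in range(len(self.q_pos))]
        return values.index(max(values))

    def update(self, arm, reward):
        # Route the observed reward to the matching stream only.
        if reward >= 0:
            self.q_pos[arm] += self.lr * (reward - self.q_pos[arm])
        else:
            self.q_neg[arm] += self.lr * (reward - self.q_neg[arm])
```

Raising pos_weight relative to neg_weight yields an agent that over-weights gains, while the reverse produces loss-averse behavior; that single dial is the sense in which one parametric family can cover a range of behavioral profiles.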
Unified Models of Human Behavioral Agents in Bandits, Contextual Bandits and RL
Guillermo Cecchi
Djallel Bouneffouf
Jenna Reinen
Models of Human Behavioral Agents in Bandits, Contextual Bandits and RL
Guillermo Cecchi
Djallel Bouneffouf
Jenna Reinen
A Story of Two Streams: Reinforcement Learning Models from Human Behavior and Neuropsychiatry
Guillermo Cecchi
Djallel Bouneffouf
Jenna Reinen
Drawing inspiration from behavioral studies of human decision making, we propose here a more general and flexible parametric framework for reinforcement learning that extends standard Q-learning to a two-stream model for processing positive and negative rewards, and allows us to incorporate a wide range of reward-processing biases -- an important component of human decision making that can help us better understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems, as well as various neuropsychiatric conditions associated with disruptions in normal reward processing. From the computational perspective, we observe that the proposed Split-QL model and its clinically inspired variants consistently outperform standard Q-Learning and SARSA methods, as well as recently proposed Double Q-Learning approaches, on simulated tasks with particular reward distributions, a real-world dataset capturing human decision-making in gambling tasks, and the Pac-Man game in a lifelong learning setting across different reward stationarities.
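A minimal sketch of the split update described above, assuming two tabular value arrays, one per reward stream, each bootstrapping from its own estimate while the greedy action is chosen under the weighted combination. The function name, bias weights, and hyperparameter values are assumptions for illustration and may differ from the paper's exact Split-QL formulation.

```python
import numpy as np

def split_q_update(q_pos, q_neg, s, a, r, s_next,
                   alpha=0.1, gamma=0.95, w_pos=1.0, w_neg=1.0):
    """One step of a split Q-learning update (illustrative sketch).

    q_pos and q_neg are (n_states, n_actions) arrays holding the
    positive-reward and negative-reward streams. The bias weights
    w_pos / w_neg and the hyperparameters are placeholders, not the
    paper's settings.
    """
    # Decompose the reward into its positive and negative parts.
    r_pos, r_neg = max(r, 0.0), min(r, 0.0)

    # Greedy next action under the combined (weighted) value.
    q_comb_next = w_pos * q_pos[s_next] + w_neg * q_neg[s_next]
    a_next = int(np.argmax(q_comb_next))

    # Each stream bootstraps from its own estimate at the chosen action.
    q_pos[s, a] += alpha * (r_pos + gamma * q_pos[s_next, a_next] - q_pos[s, a])
    q_neg[s, a] += alpha * (r_neg + gamma * q_neg[s_next, a_next] - q_neg[s, a])
    return a_next
```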
Reinforcement Learning Models of Human Behavior: Reward Processing in Mental Disorders
Guillermo Cecchi
Djallel Bouneffouf
Jenna Reinen
Drawing inspiration from behavioral studies of human decision making, we propose here a general parametric framework for a reinforcement learning problem, which extends the standard Q-learning approach to incorporate a two-stream framework of reward processing with biases biologically associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain. For the AI community, the development of agents that react differently to different types of rewards can enable us to understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems. Empirically, the proposed model outperforms Q-Learning and Double Q-Learning in artificial scenarios with certain reward distributions and in real-world human decision-making gambling tasks. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model capturing reward processing abnormalities across multiple mental conditions and user preferences in long-term recommendation systems.
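As a rough illustration of how such reward-processing biases can be expressed as parameters, the snippet below evaluates a single option under a few weight settings on the positive and negative streams. The profile names and numeric values are hypothetical placeholders chosen for illustration only; they are not the condition-specific parameters reported in the paper.

```python
def combined_value(q_pos, q_neg, pos_weight, neg_weight):
    # Weighted sum of the positive- and negative-reward streams.
    return pos_weight * q_pos + neg_weight * q_neg

# An option with a moderate expected gain (q_pos) and occasional losses (q_neg).
q_pos, q_neg = 1.0, -0.6

# Hypothetical bias profiles; values are placeholders, not fitted parameters.
profiles = [("balanced", 1.0, 1.0),
            ("gain_seeking", 1.5, 0.5),
            ("loss_averse", 0.5, 1.5)]

for label, w_pos, w_neg in profiles:
    print(label, combined_value(q_pos, q_neg, w_pos, w_neg))
# balanced -> 0.4, gain_seeking -> 1.2, loss_averse -> -0.4
```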