Publications

Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi
Hadi Nekoei
Xutong Zhao
Janarthanan Rajendran
Miao Liu
Cooperative Multi-agent Reinforcement Learning (MARL) algorithms with Zero-Shot Coordination (ZSC) have gained significant attention in recent years. ZSC refers to the ability of agents to coordinate zero-shot (without additional interaction experience) with independently trained agents. While ZSC is crucial for cooperative MARL agents, it might not be achievable in complex tasks and changing environments; agents also need to adapt and improve their performance with minimal interaction with other agents. In this work, we show empirically that state-of-the-art ZSC algorithms perform poorly when paired with agents trained with different learning methods, and that they require millions of interaction samples to adapt to these new partners. To investigate this issue, we formally define a framework, based on the popular cooperative multi-agent game Hanabi, to evaluate the adaptability of MARL methods. In particular, we create a diverse set of pre-trained agents and define a new metric, adaptation regret, that measures an agent's ability to efficiently adapt and improve its coordination performance when paired with a held-out pool of partners, beyond its ZSC performance. After evaluating several SOTA algorithms with our framework, our experiments reveal that naive Independent Q-Learning (IQL) agents in most cases adapt as quickly as the SOTA ZSC algorithm Off-Belief Learning (OBL). This finding raises an interesting research question: how can we design MARL algorithms with high ZSC performance that also adapt quickly to unseen partners? As a first step, we study the role of different hyper-parameters and design choices in the adaptability of current MARL algorithms. Our experiments show that two categories of hyper-parameters, controlling training-data diversity and the optimization process, have a significant impact on the adaptability of Hanabi agents.
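For intuition, here is a minimal sketch of how an adaptation-regret-style metric could be computed. This is an illustration under our own assumptions (a per-partner oracle score and aligned episode logs), not the paper's exact definition:

```python
import numpy as np

def adaptation_regret(agent_scores, oracle_scores):
    """Illustrative adaptation-regret-style metric (not the paper's
    exact formula): the cumulative gap between an oracle's per-episode
    score and the adapting agent's score, averaged over a pool of
    held-out partners.

    agent_scores:  dict mapping partner id -> array of episode scores
                   achieved by the learning agent while adapting.
    oracle_scores: dict mapping partner id -> matching oracle scores.
    """
    regrets = []
    for partner, scores in agent_scores.items():
        gap = np.asarray(oracle_scores[partner]) - np.asarray(scores)
        regrets.append(gap.sum())  # cumulative shortfall for this partner
    return float(np.mean(regrets))  # average over the held-out pool
```

Under this reading, a lower value means the agent closes the gap to the oracle faster during adaptation, which is the behavior the metric is meant to reward.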
MARCO: A Memory-Augmented Reinforcement Framework for Combinatorial Optimization
Andoni I. Garmendia
Josu Ceberio
Alexander Mendiburu
Open, Closed, or Small Language Models for Text Classification?
Hao Yu
Zachary Yang
Kellin Pelrine
Jean-François Godbout
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks. But many questions remain, including whether open-source models match closed ones, why these models excel or struggle with certain tasks, and what types of practical procedures can improve performance. We address these questions in the context of classification by evaluating three classes of models using eight datasets across three distinct tasks: named entity recognition, political party prediction, and misinformation detection. While larger LLMs often lead to improved performance, open-source models can rival their closed-source counterparts with fine-tuning. Moreover, supervised smaller models, like RoBERTa, can achieve similar or even greater performance than generative LLMs on many datasets. On the other hand, closed models maintain an advantage on hard tasks that demand the most generalizability. This study underscores the importance of model selection based on task requirements.
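As an illustration of the supervised baseline discussed above, here is a minimal sketch of fine-tuning RoBERTa for text classification with the Hugging Face transformers library. The dataset and label count are placeholders, not the paper's actual setup:

```python
# Minimal supervised fine-tuning sketch; "imdb" stands in for one of
# the eight datasets and num_labels=2 is a placeholder label count.
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```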
Pontomedullary junction as a reference for spinal cord cross-sectional area: validation across neck positions
Sandrine Bédard
Maxime Bouthillier
GTM-decon: guided-topic modeling of single-cell transcriptomes enables sub-cell-type and disease-subtype deconvolution of bulk transcriptomes
Lakshmipuram Seshadri Swapna
Michael Huang
YORC: Yoruba Reading Comprehension dataset
Aremu Anuoluwapo
Jesujoba Oluwadara Alabi
In this paper, we create YORC: a new multiple-choice Yoruba Reading Comprehension dataset based on Yoruba high-school reading comprehension examinations. We provide baseline results by performing cross-lingual transfer from the existing English RACE dataset using a pre-trained encoder-only model. Additionally, we provide results from prompting large language models (LLMs) such as GPT-4.
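To make the cross-lingual transfer baseline concrete, here is a minimal sketch of scoring one RACE-style multiple-choice example with an encoder-only model. The checkpoint is a placeholder assumption (not necessarily the paper's model), and fine-tuning on RACE before evaluating on YORC is elided:

```python
# Score a single RACE example with a multiple-choice head: pair the
# passage with (question + each option) and pick the highest-scoring pair.
from transformers import AutoTokenizer, AutoModelForMultipleChoice
from datasets import load_dataset

race = load_dataset("race", "all", split="train")
model_name = "xlm-roberta-base"  # placeholder multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

example = race[0]
firsts = [example["article"]] * 4
seconds = [example["question"] + " " + opt for opt in example["options"]]
enc = tokenizer(firsts, seconds, truncation=True, max_length=512,
                padding="max_length", return_tensors="pt")

# The model expects (batch, num_choices, seq_len) inputs.
logits = model(input_ids=enc["input_ids"].unsqueeze(0),
               attention_mask=enc["attention_mask"].unsqueeze(0)).logits
pred = logits.argmax(-1).item()          # predicted option index
label = ord(example["answer"]) - ord("A")  # gold option index
```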
Age-related bias and artificial intelligence: a scoping review
Charlene H Chu
Simon Donato-Woodger
Shehroz S Khan
Rune Nyrup
Kathleen Leslie
Alexandra Lyn
Tianyu Shi
Andria Bianchi
Amanda Grenier
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
Patrick Mark Butlin
R. Long
Eric Elmoznino
Jonathan C. P. Birch
Axel Constant
George Deane
S. Fleming
C. Frith
Xuanxiu Ji
Ryota Kanai
C. Klein
Grace W. Lindsay
Matthias Michel
Liad Mudrik
Megan A. K. Peters
Eric Schwitzgebel
Jonathan Simon
Rufin Vanrullen
Hitting the High-Dimensional Notes: An ODE for SGD learning dynamics on GLMs and multi-index models
Elizabeth Collins-Woodfin
Elliot Paquette
Inbar Seroussi
AstroPhot: Fitting Everything Everywhere All at Once in Astronomical Images
Connor J Stone
Stéphane Courteau
Jean-Charles Cuillandre
Nikhil Arora
BamQuery: a proteogenomic tool to explore the immunopeptidome and prioritize actionable tumor antigens
Maria-Virginia Ruiz Cuevas
Marie-Pierre Hardy
Jean-David Larouche
Anca Apavaloaei
Eralda Kina
Krystel Vincent
Patrick Gendron
Jean-Philippe Laverdure
Chantal Durette
Pierre Thibault
Claude Perreault
Grégory Ehx
Morphological Parameters and Associated Uncertainties for 8 Million Galaxies in the Hyper Suprime-Cam Wide Survey
Aritra Ghosh
C. Urry
Aayush Mishra
P. Natarajan
D. Sanders
Daisuke Nagai
Chuan Tian
Nico Cappelluti
J. Kartaltepe
M. Powell
Amrit Rau
Ezequiel Treister
We use the Galaxy Morphology Posterior Estimation Network (GaMPEN) to estimate morphological parameters and associated uncertainties for ∼8 million galaxies in the Hyper Suprime-Cam Wide survey with z ≤ 0.75 and m ≤ 23. GaMPEN is a machine-learning framework that estimates Bayesian posteriors for a galaxy's bulge-to-total light ratio (L_B/L_T), effective radius (R_e), and flux (F). By first training on simulations of galaxies and then applying transfer learning using real data, we trained GaMPEN with 1% of our data set. This two-step process will be critical for applying machine-learning algorithms to future large imaging surveys, such as …
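The two-step recipe (pretrain on simulations, then transfer-learn on a small labelled subset of real images) can be outlined as follows. This is an illustrative PyTorch sketch under assumed image sizes and a plain regression loss, not GaMPEN's actual architecture or its posterior-estimation objective:

```python
# Simulate-then-transfer sketch: pretrain a small CNN on simulated
# galaxy images, then fine-tune the same network on ~1% of real data.
# Image size (1x64x64), layer widths, and loss are all assumptions.
import torch
import torch.nn as nn

class MorphologyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        # Predict three morphological parameters: L_B/L_T, R_e, flux.
        self.head = nn.Linear(64 * 16 * 16, 3)

    def forward(self, x):
        return self.head(self.features(x))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # placeholder; GaMPEN predicts full posteriors
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()

model = MorphologyNet()
# Step 1: pretrain on simulated galaxies (loader left undefined here).
# train(model, simulated_loader, epochs=20, lr=1e-3)
# Step 2: transfer-learn on the small real labelled subset,
# typically with a smaller learning rate.
# train(model, real_loader, epochs=5, lr=1e-4)
```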