Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi
Hadi Nekoei
Xutong Zhao
Janarthanan Rajendran
Miao Liu
Assessing the Security of GitHub Copilot's Generated Code - A Targeted Replication Study
Vahid Majdinasab
Michael Joshua Bishop
Shawn Rasheed
Arghavan Moradi Dakhel
Amjed Tahir
Inferring dynamic regulatory interaction graphs from time series data with perturbations
Dhananjay Bhaskar
Daniel Sumner Magruder
Edward De Brouwer
Matheo Morales
Aarthi Venkat
Frederik Wenkel
MUDiff: Unified Diffusion for Complete Molecule Generation
Chenqing Hua
Sitao Luan
Minkai Xu
Zhitao Ying
Jie Fu
Stefano Ermon
The evidence mismatch in pediatric surgical practice
Marina Broomfield
Zena Agabani
Elena Guadagno
Robert Baird
Differentiable visual computing for inverse problems and machine learning
Andrew Spielberg
Fangcheng Zhong
Konstantinos Rematas
Krishna Murthy
Cengiz Oztireli
Tzu-Mao Li
From physics to sentience: Deciphering the semantics of the free-energy principle and evaluating its claims: Comment on "Path integrals, particular kinds, and strange things" by Karl Friston et al.
Zahra Sheikhbahaee
Adam Safron
Casper Hesp
Leveraging Function Space Aggregation for Federated Learning at Scale
Nikita Dhawan
Nicole Elyse Mitchell
Zachary Charles
Zachary Garrett
The federated learning paradigm has motivated the development of methods for aggregating multiple client updates into a global server model, without sharing client data. Many federated learning algorithms, including the canonical Federated Averaging (FedAvg), take a direct (possibly weighted) average of the client parameter updates, motivated by results in distributed optimization. In this work, we adopt a function space perspective and propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients, using an estimate based on their Fisher information. We evaluate FedFish on realistic, large-scale cross-device benchmarks. While the performance of FedAvg can suffer as client models drift further apart, we demonstrate that FedFish is more robust to longer local training. Our evaluation across several settings in image and language benchmarks shows that FedFish outperforms FedAvg as local training epochs increase. Further, FedFish results in global networks that are more amenable to efficient personalization via local fine-tuning on the same or shifted data distributions. For instance, federated pretraining on the C4 dataset, followed by few-shot personalization on Stack Overflow, results in a 7% improvement in next-token prediction by FedFish over FedAvg.
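A minimal sketch of what Fisher-based aggregation could look like, assuming each client supplies a diagonal Fisher estimate alongside its parameters; the function name `fisher_weighted_aggregate` and the per-coordinate precision-weighted average are illustrative assumptions, not the paper's exact FedFish procedure:

```python
import numpy as np

def fisher_weighted_aggregate(client_params, client_fishers, eps=1e-8):
    """Hypothetical sketch (not the authors' exact FedFish algorithm):
    average client parameters per coordinate, weighted by each client's
    diagonal Fisher estimate, so coordinates a client's local function
    depends on strongly contribute more to the global model.

    client_params  : list of 1-D arrays, one parameter vector per client
    client_fishers : list of 1-D arrays, diagonal Fisher estimates
    """
    params = np.stack(client_params)    # (num_clients, num_params)
    fishers = np.stack(client_fishers)  # (num_clients, num_params)
    # Precision-weighted average; eps guards against all-zero Fisher entries.
    return (fishers * params).sum(axis=0) / (fishers.sum(axis=0) + eps)
```

Compared with FedAvg's plain (possibly weighted) average, this kind of weighting lets coordinates with higher local curvature dominate, which is one way to approximate matching client functions rather than raw parameters.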
Generalizable Imitation Learning Through Pre-Trained Representations
Wei-Di Chang
Francois Hogan
In this paper we leverage self-supervised vision transformer models and their emergent semantic abilities to improve the generalization abilities of imitation learning policies. We introduce BC-ViT, an imitation learning algorithm that leverages rich DINO pre-trained Visual Transformer (ViT) patch-level embeddings to obtain better generalization when learning through demonstrations. Our learner sees the world by clustering appearance features into semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types. We show that this representation enables generalized behaviour by evaluating imitation learning across a diverse dataset of object manipulation tasks. Our method, data and evaluation approach are made available to facilitate further study of generalization in Imitation Learners.
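As an illustrative sketch of the clustering idea described above, assuming DINO ViT patch embeddings have already been extracted; the helper `patch_embeddings_to_keypoints` and the k-means grouping are hypothetical stand-ins for the paper's keypoint formation, not the released BC-ViT code:

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_embeddings_to_keypoints(patch_embeds, grid_hw, num_keypoints=8):
    """Hypothetical sketch (not the paper's exact BC-ViT pipeline):
    cluster ViT patch embeddings into semantic groups and return the
    mean grid location of each cluster as a 2-D keypoint.

    patch_embeds : (num_patches, dim) array, e.g. from a DINO ViT
    grid_hw      : (rows, cols) of the patch grid, row-major patch order
    """
    rows, cols = grid_hw
    assert patch_embeds.shape[0] == rows * cols
    labels = KMeans(n_clusters=num_keypoints, n_init=10).fit_predict(patch_embeds)
    # (row, col) coordinates of every patch on the grid.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).reshape(-1, 2)
    # One keypoint per cluster: the average location of its member patches.
    return np.stack([coords[labels == k].mean(axis=0)
                     for k in range(num_keypoints)])
```

A behaviour-cloning policy could then consume these keypoint coordinates instead of raw pixels, which is what would make the representation robust to appearance variation across objects and scenes.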
Adaptive Integration of Categorical and Multi-relational Ontologies with EHR Data for Medical Concept Embedding
Chin Wang Cheong
Kejing Yin
William K. Cheung
Jonathan Poon
AfroBench: How Good are Large Language Models on African Languages?
Jessica Ojo
Kelechi Ogueji
Pontus Stenetorp