MuLoCo: Muon is a practical inner optimizer for DiLoCo
Benjamin Thérien
Xiaolong Huang
Multi-Modal Language Models as Text-to-Image Model Evaluators
Jiahui Chen
Candace Ross
Reyhane Askari Hemmat
Koustuv Sinha
Melissa Hall
Michal Drozdzal
Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning
Guozheng Ma
Lu Li
Zilin Wang
Li Shen
Dacheng Tao
Effectively scaling up deep reinforcement learning models has proven notoriously difficult due to network pathologies during training, motivating various targeted interventions such as periodic resets and architectural advances such as layer normalization. Instead of pursuing more complex modifications, we show that introducing static network sparsity alone can unlock scaling potential beyond that of dense counterparts built on state-of-the-art architectures. This is achieved through simple one-shot random pruning, where a predetermined percentage of network weights is randomly removed once before training. Our analysis reveals that, in contrast to naively scaled-up dense DRL networks, such sparse networks achieve both higher parameter efficiency for network expressivity and stronger resistance to optimization challenges such as plasticity loss and gradient interference. We further extend our evaluation to visual and streaming RL scenarios, demonstrating the consistent benefits of network sparsity.
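The pruning recipe described in this abstract is simple enough to sketch. Below is a minimal PyTorch illustration of one-shot random pruning with static masks; the sparsity ratio, the rule for skipping 1-D parameters, and the helper names are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

def one_shot_random_prune(model: nn.Module, sparsity: float = 0.9) -> dict[str, torch.Tensor]:
    """Zero out a random, fixed fraction of weights once, before training.

    Returns a boolean mask per weight tensor so the same positions can be
    kept at zero after every optimizer step (static sparsity).
    """
    masks: dict[str, torch.Tensor] = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and normalization parameters
            continue
        mask = torch.rand_like(param) > sparsity  # keep ~(1 - sparsity) of the weights
        param.data.mul_(mask)
        masks[name] = mask
    return masks

def reapply_masks(model: nn.Module, masks: dict[str, torch.Tensor]) -> None:
    """Re-apply the pruning masks so pruned weights stay exactly zero."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```

In a training loop, `reapply_masks` would be called after each `optimizer.step()` so that the randomly pruned weights remain zero throughout training.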
Outsourced diffusion sampling: Efficient posterior inference in latent spaces of generative models
Siddarth Venkatraman
Mohsin Hasan
Minsu Kim
Luca Scimeca
Marcin Sendera
Nikolay Malkin
Any well-behaved generative model over a variable …
Plasticity as the Mirror of Empowerment
David Abel
Michael Bowling
Andre Barreto
Will Dabney
Shi Dong
Steven Hansen
Anna Harutyunyan
Clare Lyle
Georgios Piliouras
Jonathan Richens
Mark Rowland
Tom Schaul
Satinder Singh
PoisonBench: Assessing Language Model Vulnerability to Poisoned Preference Data
Tingchen Fu
Mrinank Sharma
Philip Torr
Shay B. Cohen
Fazl Barez
Preference learning is a central component for aligning current LLMs, but this process can be vulnerable to data poisoning attacks. To address this concern, we introduce PoisonBench, a benchmark for evaluating large language models' susceptibility to data poisoning during preference learning. Data poisoning attacks can manipulate large language model responses to include hidden malicious content or biases, potentially causing the model to generate harmful or unintended outputs while appearing to function normally. We deploy two distinct attack types across eight realistic scenarios, assessing 22 widely-used models. Our findings reveal concerning trends: (1) scaling up parameter size does not always enhance resilience against poisoning attacks, and the effect on model resilience varies across model suites; (2) there is a log-linear relationship between the attack's effect and the data poison ratio; (3) the effect of data poisoning can generalize to extrapolated triggers that are not included in the poisoned data. These results expose weaknesses in current preference learning techniques, highlighting the urgent need for more robust defenses against malicious model and data manipulation.
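To make finding (2) concrete: a log-linear relationship means the measured attack effect grows linearly in the logarithm of the poison ratio. With illustrative notation (not taken from the paper), $E(r) \approx a + b \log r$, where $E(r)$ is the attack effect at poison ratio $r$ and $a$, $b$ are fitted constants.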
Position: Probabilistic Modelling is Sufficient for Causal Inference
Bruno Mlodozeniec
Richard E. Turner
Proceedings of 1st Workshop on Advancing Artificial Intelligence through Theory of Mind
Mouad Abrini
Omri Abend
Dina M. Acklin
Henny Admoni
Gregor Aichinger
Nitay Alon
Zahra Ashktorab
Ashish Atreja
Moises Auron
Alexander Aufreiter
Raghav Awasthi
Soumya Banerjee
Joseph Barnby
Rhea Basappa
Severin Bergsmann
Djallel Bouneffouf
Patrick Callaghan
Marc Cavazza
Thierry Chaminade
Sonia Chernova
Mohamed Chetouan
Moumita Choudhury
Axel Cleeremans
J. Cywinski
Fabio Cuzzolin
Hokin Deng
N'yoma Diamond
C. D. Pasquasio
Max J. van Duijn
Mahapatra Dwarikanath
Qingying Gao
Ashok Goel
Rebecca R. Goldstein
Matthew C. Gombolay
Gabriel Enrique Gonzalez
Amar Halilovic
Tobias Halmdienst
Mahimul Islam
Julian Jara-Ettinger
Natalie Kastel
Renana Keydar
Ashish K. Khanna
Mahdi Khoramshahi
Jihyun Kim
Mihyeon Kim
Youngbin Kim
Senka Krivic
Nikita Krasnytskyi
Arun Kumar
Junehyoung Kwon
EunJu Lee
Shane Lee
Peter R. Lewis
Xue Li
Yijiang Li
Michal Lewandowski
Nathan Lloyd
Matthew B. Luebbers
Dezhi Luo
Haiyun Lyu
Dwarikanath Mahapatra
Kamal Maheshwari
Mallika Mainali
P. Mathur
Patrick Mederitsch
Shuwa Miura
Manuel Preston de Miranda
Reuth Mirsky
Shreya Mishra
Nina M. Moorman
Katelyn Morrison
John Muchovej
Bernhard Nessler
Felix Nessler
Hieu Minh Jord Nguyen
Abby Ortego
F. Papay
Antoine Pasquali
Hamed Rahimi
C. Raghu
Amanda L. Royka
Stefan Sarkadi
Jaelle Scheuerman
Simon Schmid
Paul Schrater
Anik Sen
Zahra Sheikhbahaee
Ke Shi
Reid G. Simmons
Nishant Singh
Mason O. Smith
Ramira van der Meulen
Anthia Solaki
Haoran Sun
Viktor Szolga
Matthew E. Taylor
Travis Taylor
Sanne van Waveren
Juan David Vargas
R. Verbrugge
Eitan Wagner
Justin D. Weisz
Ximing Wen
William Yeoh
Wenlong Zhang
Michelle Zhao
Shlomo Zilberstein
Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers
Kusha Sareen
Morgane M Moss
Arian Hosseini
Real-time fine finger motion decoding for transradial amputees with surface electromyography
Zihan Weng
Yang Xiao
Peiyang Li
Chanlin Yi
Hailin Ma
Guang Yao
Yuan Lin
Fali Li
Dezhong Yao
Jingming Hou
Yangsong Zhang
Peng Xu
REARANK: Reasoning Re-ranking Agent via Reinforcement Learning
Le Zhang
Bo Wang
Xipeng Qiu
We present REARANK, a large language model (LLM)-based listwise reasoning reranking agent. REARANK explicitly reasons before reranking, significantly improving both performance and interpretability. Leveraging reinforcement learning and data augmentation, REARANK achieves substantial improvements over baseline models across popular information retrieval benchmarks, notably requiring only 179 annotated samples. Built on top of Qwen2.5-7B, our REARANK-7B demonstrates performance comparable to GPT-4 on both in-domain and out-of-domain benchmarks, and even surpasses GPT-4 on the reasoning-intensive BRIGHT benchmark. These results underscore the effectiveness of our approach and highlight how reinforcement learning can enhance LLM reasoning capabilities in reranking.
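The abstract describes a listwise reranker that reasons before emitting a ranking. The Python sketch below illustrates that interface under assumed conventions: the prompt wording, the 'RANKING:' tag, and the helper names are hypothetical, not taken from the paper or its code.

```python
import re

def build_listwise_prompt(query: str, passages: list[str]) -> str:
    """Listwise prompt: the model sees every candidate at once, is asked to
    reason first, and then to emit a ranking of passage indices."""
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    return (
        f"Query: {query}\n\nCandidate passages:\n{numbered}\n\n"
        "Reason step by step about which passages best answer the query, "
        "then output the indices from most to least relevant on a final "
        "line starting with 'RANKING:' as a comma-separated list."
    )

def parse_ranking(response: str, num_passages: int) -> list[int]:
    """Recover the ranked order from the model's output; any indices the
    model omitted or garbled are appended in their original order."""
    tail = response.rsplit("RANKING:", 1)[-1]
    proposed = [int(t) for t in re.findall(r"\d+", tail) if int(t) < num_passages]
    seen: set[int] = set()
    order = [i for i in proposed if not (i in seen or seen.add(i))]
    return order + [i for i in range(num_passages) if i not in order]
```

The parsed order is then used to reorder the candidate list; retrieval metrics computed on that reordering would serve as the reranking signal, with the reasoning text kept only for interpretability.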