Publications

StarCoder 2 and The Stack v2: The Next Generation
Anton Lozhkov
Raymond Li
Loubna Ben Allal
Federico Cassano
Joel Lamy-Poirier
Nouamane Tazi
Ao Tang
Dmytro Pykhtar
Jiawei Liu
Yuxiang Wei
Tianyang Liu
Max Tian
Denis Kocetkov
Arthur Zucker
Younes Belkada
Zijian Wang
Qian Liu
Dmitry Abulkhanov
Indraneil Paul
Zhuang Li
Wen-Ding Li
Megan L. Risdal
Jia LI
Jian Zhu
Terry Yue Zhuo
Evgenii Zheltonozhskii
Nii Osae Osae Dade
Wenhao Yu
Lucas Krauss
Naman Jain
Yixuan Su
Xuanli He
Manan Dey
Edoardo Abati
Yekun Chai
Niklas Muennighoff
Xiangru Tang
Muhtasham Oblokulov
Christopher Akiki
Marc Marone
Chenghao Mou
Mayank Mishra
Alex Gu
Binyuan Hui
Tri Dao
Armel Zebaze
Olivier Dehaene
Nicolas Patry
Canwen Xu
Julian McAuley
Han Hu
Torsten Scholak
Sebastien Paquet
Jennifer Robinson
Carolyn Jane Anderson
Mostofa Ali Patwary
Nima Tajbakhsh
Yacine Jernite
Carlos Muñoz Ferrandis
Lingming Zhang
Sean Hughes
Thomas Wolf
Arjun Guha
Leandro von Werra
Harm de Vries
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
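The training-data transparency claim rests on those SWHIDs, which can be recomputed locally from a file's bytes. As a hedged illustration (not code from the paper; the file path is a placeholder), a content SWHID is the Git-compatible blob hash that Software Heritage assigns to raw file contents:

```python
import hashlib

def content_swhid(data: bytes) -> str:
    """SWHID for a file's raw contents.

    Software Heritage content identifiers reuse Git's blob hashing:
    sha1 over the header b"blob <length>\0" followed by the file bytes.
    """
    header = b"blob " + str(len(data)).encode() + b"\x00"
    return "swh:1:cnt:" + hashlib.sha1(header + data).hexdigest()

# Illustrative usage; "example.py" is a hypothetical local file.
with open("example.py", "rb") as f:
    print(content_swhid(f.read()))
```

Comparing such locally computed identifiers against the released SWHID lists is one way to check whether a given file contributed to the training set.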
The use of dose surface maps as a tool to investigate spatial dose delivery accuracy for the rectum during prostate radiotherapy
Haley Patrick
When does word order matter and when doesn't it?
Xuanda Chen
Timothy John O'Donnell
Language models (LMs) may appear insensitive to word order changes in natural language understanding (NLU) tasks. In this paper, we propose that linguistic redundancy can explain this phenomenon, whereby word order and other linguistic cues such as case markers provide overlapping and thus redundant information. Our hypothesis is that models exhibit insensitivity to word order when the order provides redundant information, and the degree of insensitivity varies across tasks. We quantify how informative word order is using mutual information (MI) between unscrambled and scrambled sentences. Our results show that the less informative word order is, the more consistent the model's predictions are between unscrambled and scrambled sentences. We also find that the effect varies across tasks: for some tasks, like SST-2, LMs' predictions are almost always consistent with the original ones even if the pointwise MI (PMI) changes, while for others, like RTE, the consistency is near random when the PMI gets lower, i.e., when word order is really important.
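A minimal sketch of the scrambling-consistency measurement described above (not the authors' code; `classify` is a stand-in for any NLU classifier, e.g. one fine-tuned on SST-2 or RTE):

```python
import random

def scramble(sentence: str, seed: int = 0) -> str:
    """Randomly permute the word order of a sentence."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def consistency(classify, sentences) -> float:
    """Fraction of sentences whose predicted label survives scrambling.

    `classify` maps a sentence to a label; high consistency on a task
    suggests word order carries little non-redundant information there.
    """
    same = sum(classify(s) == classify(scramble(s)) for s in sentences)
    return same / len(sentences)
```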
Acoustic tactile sensing for mobile robot wheels
Wilfred Mason
David Brenken
Falcon Z. Dai
Ricardo Gonzalo Cruz Castillo
Olivier St-Martin Cormier
ICE-SEARCH: A Language Model-Driven Feature Selection Approach
Tianze Yang
Tianyi Yang
Shaoshan Liu
Fuyuan Lyu
This study unveils the In-Context Evolutionary Search (ICE-SEARCH) method, the first work that melds language models (LMs) with evolutionary algorithms for feature selection (FS) tasks and demonstrates its effectiveness in Medical Predictive Analytics (MPA) applications. ICE-SEARCH harnesses the crossover and mutation capabilities inherent in LMs within an evolutionary framework, significantly improving FS through the model's comprehensive world knowledge and its adaptability to a variety of roles. Our evaluation of this methodology spans three crucial MPA tasks: stroke, cardiovascular disease, and diabetes, where ICE-SEARCH outperforms traditional FS methods in pinpointing essential features for medical applications. ICE-SEARCH achieves State-of-the-Art (SOTA) performance in stroke prediction and diabetes prediction; the Decision-Randomized ICE-SEARCH ranks as SOTA in cardiovascular disease prediction. Our results not only demonstrate the efficacy of ICE-SEARCH in medical FS but also underscore the versatility, efficiency, and scalability of integrating LMs in FS tasks. The study emphasizes the critical role of incorporating domain-specific insights, illustrating ICE-SEARCH's robustness, generalizability, and swift convergence. This opens avenues for further research into comprehensive and intricate FS landscapes, marking a significant stride in the application of artificial intelligence in medical predictive analytics.
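To make the evolutionary framing concrete, here is a hedged toy sketch of LM-driven feature selection (not the ICE-SEARCH implementation; `fitness` and `lm_propose` are placeholder callables, the latter standing in for a prompted language model that combines and perturbs two parent feature subsets):

```python
import random

def evolve_features(features, fitness, lm_propose, generations=10, pop_size=8):
    """Toy evolutionary loop for feature selection (FS).

    fitness(subset) -> float, e.g. validation score of a downstream predictor.
    lm_propose(parent_a, parent_b) -> list, a placeholder for a language-model
    call performing crossover + mutation on two parent feature subsets.
    """
    population = [random.sample(features, len(features) // 2) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]  # keep the fittest subsets
        children = [lm_propose(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```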
On the Challenges and Opportunities in Generative AI
Laura Manduchi
Kushagra Pandey
Robert Bamler
Ryan Cotterell
Sina Daubener
Sophie Fellenz
Asja Fischer
Thomas Gartner
Matthias Kirchler
Marius Kloft
Yingzhen Li
Christoph Lippert
Gerard de Melo
Eric Nalisnick
Bjorn Ommer
Rajesh Ranganath
Maja Rudolph
Karen Ullrich
Guy Van den Broeck
Julia E Vogt
Yixin Wang
Florian Wenzel
Stephan Mandt
Vincent Fortuin
A density estimation perspective on learning from pairwise human preferences
Vincent Dumoulin
Daniel D. Johnson
Yann Dauphin
Learning from human feedback (LHF) -- and in particular learning from pairwise preferences -- has recently become a crucial ingredient in training large language models (LLMs), and has been the subject of much research. Most recent works frame it as a reinforcement learning problem, where a reward function is learned from pairwise preference data and the LLM is treated as a policy which is adapted to maximize the rewards, often under additional regularization constraints. We propose an alternative interpretation which centers on the generative process for pairwise preferences and treats LHF as a density estimation problem. We provide theoretical and empirical results showing that for a family of generative processes defined via preference behavior distribution equations, training a reward function on pairwise preferences effectively models an annotator's implicit preference distribution. Finally, we discuss and present findings on "annotator misspecification" -- failure cases where wrong modeling assumptions are made about annotator behavior, resulting in poorly-adapted models -- suggesting that approaches that learn from pairwise human preferences could have trouble learning from a population of annotators with diverse viewpoints.
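For context, the reward function learned from pairwise preference data is conventionally fit with a Bradley-Terry-style likelihood, where the probability that the chosen response beats the rejected one is a sigmoid of the reward difference. A minimal sketch of that per-pair loss (standard background, not the paper's code):

```python
import math

def pairwise_preference_nll(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of one comparison under a Bradley-Terry model:
    P(chosen preferred) = sigmoid(r_chosen - r_rejected).
    Summing this over a preference dataset and minimizing it fits the reward
    function by maximum likelihood on the annotators' comparison behavior."""
    margin = reward_chosen - reward_rejected
    return math.log1p(math.exp(-margin))
```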
RAMEN Unveils Clinical Variable Networks for COVID-19 Severity and Long COVID Using Absorbing Random Walks and Genetic Algorithms
Yiwei Xiong
Jingtao Wang
Xiaoxiao Shang
Tingting Chen
Douglas D. Fraser
Gregory Fonseca
Simon Rousseau
The COVID-19 pandemic has significantly altered global socioeconomic structures and individual lives. Understanding the disease mechanisms and facilitating diagnosis requires comprehending the complex interplay among clinical factors like demographics, symptoms, comorbidities, treatments, lab results, complications, and other metrics, and their relation to outcomes such as disease severity and long-term outcomes (e.g., post-COVID-19 condition/long COVID). Conventional correlational methods struggle with indirect and directional connections among these factors, while standard graphical methods like Bayesian networks are computationally demanding for extensive clinical variables. In response, we introduced RAMEN, a methodology that integrates Genetic Algorithms with random walks for efficient Bayesian network inference, designed to map the intricate relationships among clinical variables. Applying RAMEN to the Biobanque québécoise de la COVID-19 (BQC19) dataset, we identified critical markers for long COVID and varying disease severity. The Bayesian Network, corroborated by existing literature and supported through multi-omics analyses, highlights significant clinical variables linked to COVID-19 outcomes. RAMEN's ability to accurately map these connections contributes substantially to developing early and effective diagnostics for severe COVID-19 and long COVID.
Effective Latent Differential Equation Models via Attention and Multiple Shooting
Germán Abrevaya
Mahta Ramezanian-Panahi
Jean-Christophe Gagnon-Audet
Pablo Polosecki
Silvina Ponce Dawson
Guillermo Cecchi
Correction to: Multi-agent reinforcement learning for fast-timescale demand response of residential loads
Vincent Mai
Philippe Maisonneuve
Tianyu Zhang
Hadi Nekoei
Intra-Host Evolution Analyses in an Immunosuppressed Patient Supports SARS-CoV-2 Viral Reservoir Hypothesis
Dominique Fournelle
Fatima Mostefai
Elsa Brunet-Ratnasingham
Raphael Poujol
Jean-Christophe Grenier
José Héctor Gálvez
Amélie Pagliuzza
Inès Levade
Sandrine Moreira
Mehdi Benlarbi
Guillaume Beaudoin-Bussières
Gabrielle Gendron-Lepage
Catherine Bourassa
Alexandra Tauzin
Simon Grandjean Lapierre
Nicolas Chomont
Andrés Finzi
Daniel E. Kaufmann
Morgan Craig