
Manan Dey

Alumni

Publications

MMTEB: Massive Multilingual Text Embedding Benchmark
Kenneth Enevoldsen
Isaac Chung
Márton Kardos
Ashwin Mathur
David Stap
Wissam Siblini
Dominik Krzemiński
Genta Indra Winata
Saba Sturua
Saiteja Utpala
Mathieu Ciancone
Marion Schaeffer
Gabriel Sequeira
Shreeya Dhakal
Jonathan Rystrøm
Roman Solomatin
Ömer Çağatan
Akash Kundu
Martin Bernstorff
Shitao Xiao
Akshita Sukhlecha
Bhavish Pahwa
Rafał Poświata
Kranthi Kiran GV
Shawon Ashraf
Daniel Auras
Björn Plüster
Jan Philipp Harries
Loïc Magne
Isabelle Mohr
Mariya Hendriksen
Dawei Zhu
Hippolyte Gisserot-Boukhlef
Tom Aarsen
Jan Kostkan
Konrad Wojtasik
Taemin Lee
Marek Šuppa
Crystina Zhang
Roberta Rocca
Mohammed Hamdy
Andrianos Michail
John Yang
Manuel Faysse
Aleksei Vatolin
Nandan Thakur
Dipam Vasani
Pranjal Chitale
Simone Tedeschi
Nguyen Tai
Artem Snegirev
Michael Günther
Mengzhou Xia
Weijia Shi
Jordan Clive
Gayatri Krishnakumar
Anna Maksimova
Silvan Wehrli
Maria Tikhonova
Henil Panchal
Aleksandr Abramov
Malte Ostendorff
Zheng Liu
Simon Clematide
Lester James Miranda
Alena Fenogenova
Guangyu Song
Ruqiya Bin Safi
Wen-Ding Li
Alessia Borghini
Federico Cassano
Hongjin Su
Jimmy Lin
Howard Yen
Lasse Hansen
Sara Hooker
Chenghao Xiao
Orion Weller
Niklas Muennighoff
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB), a large-scale, community-driven expansion of MTEB covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost.
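The correlation-based downsampling mentioned in the abstract can be sketched as a greedy filter: keep a task only if its model-score profile is not highly correlated with any already-kept task. This is a minimal illustration of the general idea, not the benchmark's actual procedure; the function name, threshold, and toy score matrix are all hypothetical.

```python
import numpy as np

# Hypothetical scores: rows = models, columns = evaluation tasks.
rng = np.random.default_rng(0)
scores = rng.random((10, 6))
# Make task 1 nearly identical to task 0, so it is redundant.
scores[:, 1] = scores[:, 0] + 0.01 * rng.random(10)

def downsample_tasks(scores, max_corr=0.95):
    """Greedily keep tasks whose per-model score vector is not highly
    correlated with any already-kept task (illustrative sketch only)."""
    kept = []
    for t in range(scores.shape[1]):
        if all(abs(np.corrcoef(scores[:, t], scores[:, k])[0, 1]) < max_corr
               for k in kept):
            kept.append(t)
    return kept

kept = downsample_tasks(scores)  # task 1 is dropped as redundant
```

Because rankings are driven by score differences between models, removing a task whose scores are almost a linear function of another task's changes the aggregate ranking very little, which is the intuition behind this kind of selection.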
On the Analysis and Distillation of Emergent Outlier Properties in Pre-trained Language Models
Tianyang Zhao
Kunwar Yashraj Singh
Srikar Appalaraju
Peng Tang
Ying Nian Wu
Li Erran Li
StarCoder 2 and The Stack v2: The Next Generation
Anton Lozhkov
Raymond Li
Loubna Ben allal
Federico Cassano
Joel Lamy-Poirier
Nouamane Tazi
Ao Tang
Dmytro Pykhtar
Jiawei Liu
Yuxiang Wei
Tianyang Liu
Max Tian
Denis Kocetkov
Arthur Zucker
Younes Belkada
Zijian Wang
Qian Liu
Dmitry Abulkhanov
Indraneil Paul
Zhuang Li
Wen-Ding Li
Megan L. Risdal
Jia LI
Jian Zhu
Terry Yue Zhuo
Evgenii Zheltonozhskii
Nii Osae Osae Dade
Wenhao Yu
Lucas Krauss
Naman Jain
Yixuan Su
Xuanli He
Edoardo Abati
Yekun Chai
Niklas Muennighoff
Xiangru Tang
Muhtasham Oblokulov
Christopher Akiki
Marc Marone
Chenghao Mou
Mayank Mishra
Alex Gu
Binyuan Hui
Tri Dao
Armel Zebaze
Olivier Dehaene
Nicolas Patry
Canwen Xu
Julian McAuley
Han Hu
Torsten Scholak
Sebastien Paquet
Jennifer Robinson
Carolyn Jane Anderson
Md. Mostofa Ali Patwary
Nima Tajbakhsh
Yacine Jernite
Carlos Muñoz Ferrandis
Lingming Zhang
Sean Hughes
Thomas Wolf
Arjun Guha
Leandro Von Werra
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
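The abstract mentions releasing SoftWare Heritage persistent IDentifiers (SWHIDs) for the source code data. For individual content objects, SWHIDs reuse Git's blob hashing, so they can be computed with the standard library alone; the helper name below is hypothetical, and this sketch covers only the `cnt` (content) object type.

```python
import hashlib

def swhid_for_content(data: bytes) -> str:
    """Compute a Software Heritage content identifier (swh:1:cnt:...).
    Content SWHIDs use Git-compatible blob hashing: SHA-1 over the
    header b"blob <length>\\0" followed by the raw bytes."""
    header = b"blob %d\x00" % len(data)
    digest = hashlib.sha1(header + data).hexdigest()
    return f"swh:1:cnt:{digest}"

# Empty content reproduces Git's well-known empty-blob hash.
print(swhid_for_content(b""))
# → swh:1:cnt:e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Publishing these identifiers lets anyone verify, file by file, whether a given piece of code was part of the training set without redistributing the code itself.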
SantaCoder: don't reach for the stars!
Loubna Ben allal
Raymond Li
Denis Kocetkov
Chenghao Mou
Christopher Akiki
Carlos Muñoz Ferrandis
Niklas Muennighoff
Mayank Mishra
Alex Gu
Logesh Kumar Umapathi
Carolyn Jane Anderson
Yangtian Zi
Joel Lamy-Poirier
Hailey Schoelkopf
S. Troshin
Dmitry Abulkhanov
Manuel L. Romero
M. Lappert
Francesco De Toni
Bernardo García del Río
Qian Liu
Shamik Bose
Urvashi Bhattacharyya
Terry Yue Zhuo
Ian Yu
Paulo Villegas
Marco Zocca
Sourab Mangrulkar
D. Lansky
Huu Nguyen
Danish Contractor
Luisa Villa
Jia LI
Yacine Jernite
Sean Christopher Hughes
Daniel Fried
Arjun Guha
Leandro Von Werra
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at https://hf.co/bigcode.
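Near-duplicate filtering of the kind mentioned above is commonly built on shingling and Jaccard similarity. The sketch below is a minimal, stdlib-only illustration; the helper names and threshold are assumptions, and production pipelines typically approximate this with MinHash/LSH rather than exact pairwise comparison.

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-gram shingles of a document."""
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(max(len(toks) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def filter_near_duplicates(docs, threshold: float = 0.7):
    """Keep a document only if it is not too similar to any kept one."""
    kept = []
    for d in docs:
        s = shingles(d)
        if all(jaccard(s, shingles(k)) < threshold for k in kept):
            kept.append(d)
    return kept

docs = [
    "def add(a, b): return a + b",
    "def add(a, b):  return a + b",   # whitespace-only variant: dropped
    "print('hello world from python')",
]
deduped = filter_near_duplicates(docs)  # keeps 2 of 3 documents
```

Exact pairwise Jaccard is quadratic in corpus size, which is why large-scale code deduplication relies on hashing-based approximations of the same similarity measure.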
StarCoder: may the source be with you!
Raymond Li
Loubna Ben allal
Yangtian Zi
Niklas Muennighoff
Denis Kocetkov
Chenghao Mou
Marc Marone
Christopher Akiki
Jia LI
Jenny Chim
Qian Liu
Evgenii Zheltonozhskii
Terry Yue Zhuo
Thomas Wang
Olivier Dehaene
Mishig Davaadorj
Joel Lamy-Poirier
Joao Monteiro
Oleh Shliazhko
Nicolas Gontier
Armel Zebaze
Ming-Ho Yee
Logesh Kumar Umapathi
Jian Zhu
Ben Lipkin
Muhtasham Oblokulov
Zhiruo Wang
Rudra Murthy
Jason T Stillerman
Siva Sankalp Patel
Dmitry Abulkhanov
Marco Zocca
Zhihan Zhang
N. Fahmy
Urvashi Bhattacharyya
Wenhao Yu
Swayam Singh
Paulo Villegas
M. Kunakov
Fedor Zhdanov
Manuel Romero
Tony Lee
Nadav Timor
Jennifer Ding
Claire S Schlesinger
Hailey Schoelkopf
Jan Ebert
Tri Dao
Mayank Mishra
Alex Gu
Jennifer Robinson
Carolyn Jane Anderson
Brendan Dolan-Gavitt
Danish Contractor
Daniel Fried
Yacine Jernite
Carlos Muñoz Ferrandis
Sean Hughes
Thomas Wolf
Arjun Guha
Leandro Von Werra
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
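Multi-query attention, cited above as the enabler of fast large-batch inference, differs from standard multi-head attention in that every head keeps its own query projection but all heads share a single key/value projection, shrinking the KV cache by a factor of the head count. A minimal NumPy sketch, with illustrative shapes and names rather than the model's actual implementation:

```python
import numpy as np

def multi_query_attention(x, Wq, Wk, Wv, n_heads):
    """Multi-query attention: per-head queries, one shared K/V head."""
    T, d = x.shape
    hd = d // n_heads
    q = (x @ Wq).reshape(T, n_heads, hd)  # per-head queries
    k = x @ Wk                            # shared keys,   shape (T, hd)
    v = x @ Wv                            # shared values, shape (T, hd)
    out = np.empty((T, n_heads, hd))
    for h in range(n_heads):
        att = q[:, h] @ k.T / np.sqrt(hd)           # scaled dot product
        att = np.exp(att - att.max(axis=-1, keepdims=True))
        att /= att.sum(axis=-1, keepdims=True)      # row-wise softmax
        out[:, h] = att @ v
    return out.reshape(T, d)

rng = np.random.default_rng(0)
d, n_heads = 8, 2
x = rng.standard_normal((4, d))
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, d // n_heads))  # single K head
Wv = rng.standard_normal((d, d // n_heads))  # single V head
y = multi_query_attention(x, Wq, Wk, Wv, n_heads)
```

During autoregressive decoding only `k` and `v` need to be cached, so sharing them across heads cuts memory traffic roughly `n_heads`-fold, which is where the large-batch inference speedup comes from.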
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Teven Le Scao
Angela Fan
Christopher Akiki
Ellie Pavlick
Suzana Ilić
Daniel Hesslow
Roman Castagné
Alexandra Luccioni
François Yvon
Matthias Gallé
J. Tow
Alexander M. Rush
Stella Biderman
Alex Webson
Pawan Sasanka Ammanamanchi
Thomas Wang
Benoît Sagot
Niklas Muennighoff
Albert Villanova del Moral
Olatunji Ruwase
Rachel Bawden
Stas Bekman
Angelina McMillan-Major
Iz Beltagy
Huu Nguyen
Lucile Saulnier
Samson Tan
Pedro Ortiz Suarez
Victor Sanh
Hugo Laurençon
Yacine Jernite
Julien Launay
Margaret Mitchell
Colin Raffel
Aaron Gokaslan
Adi Simhi
Aitor Soroa
Alham Fikri Aji
Amit Alfassy
Anna Rogers
Ariel Kreisberg Nitzav
Canwen Xu
Chenghao Mou
Christopher Klamm
Colin D. Leong
Daniel Van Strien
Dragomir R. Radev
Eduardo González Ponferrada
Efrat Levkovizh
Ethan Kim
Eyal Bar Natan
Francesco De Toni
Gérard Dupont
Germán Kruszewski
Giada Pistilli
Hady Elsahar
Hamza Benyamina
Hieu Tran
Ian W. Yu
Idris Abdulmumin
Isaac L. Johnson
Itziar Gonzalez-Dios
Javier de la Rosa
Jenny Chim
Jesse Dodge
Jian Zhu
Jonathan Chang
Jörg Frohberg
Josephine L. Tobing
J. Bhattacharjee
Khalid Almubarak
Kimbo Chen
Kyle Lo
Leandro Von Werra
Leon Weber
Long Phan
Loubna Ben allal
Ludovic Tanguy
Manuel Romero Muñoz
Maraim Masoud
María Grandury
Mario Šaško
Max Huang
Maximin Coavoux
Mayank Singh
Mike Tian-Jian Jiang
Vu Minh Chien
Mohammad Ali Jauhar
Mustafa Ghaleb
Nishant Subramani
Nora Kassner
Nurulaqilla Khamis
Olivier Nguyen
Omar Espejel
Ona de Gibert
Paulo Villegas
Pierre Colombo
Priscilla A. Amuok
Quentin Lhoest
Rheza Harliman
Rishi Bommasani
Roberto Luis López
Rui Ribeiro
Salomey Osei
Sampo Pyysalo
Sebastian Nagel
Shamik Bose
Shamsuddeen Hassan Muhammad
Shanya Sharma Sharma
Shayne Longpre
Somaieh Nikpoor
S. Silberberg
Suhas Pai
Sydney Zink
Tiago Timponi Torrent
Timo Schick
Tristan Thrush
Valentin Danchev
Vassilina Nikoulina
Veronika Laippala
Violette Lepercq
Vrinda Prabhu
Zaid Alyafeai
Zeerak Talat
Arun Raja
Benjamin Heinzerling
Chenglei Si
Elizabeth E Salesky
Sabrina J. Mielke
Wilson Y. Lee
Abheesht Sharma
Andrea Santilli
Antoine Chaffin
Arnaud Stiegler
Debajyoti Datta
Eliza Szczechla
Gunjan Chhablani
Han Wang
Harshit Pandey
Hendrik Strobelt
Jason Alan Fries
Jos Rozen
Leo Gao
Lintang A. Sutawika
M. Saiful Bari
Maged S. Al-shaibani
Matteo Manica
Nihal V. Nayak
Ryan Teehan
Samuel Albanie
Sheng Shen
Srulik Ben-David
Stephen H. Bach
Taewoon Kim
T. Bers
Thibault Févry
Trishala Neeraj
Urmish Thakker
Vikas Raunak
Xiang Tang
Zheng Xin Yong
Zhiqing Sun
Shaked Brody
Y. Uri
Hadar Tojarieh
Adam Roberts
Hyung Won Chung
Jaesung Tae
Jason Phang
Ofir Press
Conglong Li
D. Narayanan
Hatim Bourfoune
Jared Casper
Jeff Rasley
Max Ryabinin
Mayank Mishra
Minjia Zhang
Mohammad Shoeybi
Myriam Peyrounette
Nicolas Patry
Nouamane Tazi
Omar Sanseviero
Patrick von Platen
Pierre Cornette
Pierre-François Lavallée
Rémi Lacroix
Samyam Rajbhandari
Sanchit Gandhi
Shaden Smith
Stéphane Requena
Suraj Patil
Tim Dettmers
Ahmed Baruwa
Amanpreet Singh
Anastasia Cheveleva
Anne-Laure Ligozat
Arjun Subramonian
Aurélie Névéol
Charles Lovering
Dan Garrette
D. Tunuguntla
Ehud Reiter
Ekaterina Taktasheva
E. Voloshina
Eli Bogdanov
Genta Indra Winata
Hailey Schoelkopf
Jan-Christoph Kalo
Jekaterina Novikova
Jessica Zosa Forde
Xiangru Tang
Jungo Kasai
Ken Kawamura
Liam Hazan
Marine Carpuat
Miruna-Adriana Clinciu
Najoung Kim
Newton Cheng
O. Serikov
Omer Antverg
Oskar van der Wal
Rui Zhang
Ruochen Zhang
Sebastian Gehrmann
Shachar Mirkin
S. Pais
Tatiana Shavrina
Thomas Scialom
Tian Yun
Tomasz Limisiewicz
Verena Teresa Rieser
Vitaly Protasov
V. Mikhailov
Yada Pruksachatkun
Yonatan Belinkov
Zachary Bamberger
Zdeněk Kasner
A. Pestana
Amir Feizpour
Ammar Khan
Amy Faranak
A. Santos
Anthony Hevia
Antigona Unldreaj
Arash Aghagol
Arezoo Abdollahi
Aycha Tammour
Azadeh Hajihosseini
Bahareh Behroozi
Benjamin A. Ajibade
B. Saxena
Carlos Muñoz Ferrandis
Danish Contractor
D. Lansky
Davis David
Douwe Kiela
Duong Anh Nguyen
Edward Chwee Kheng Tan
Emi Baylor
Ezinwanne Ozoani
F. Mirza
Frankline Ononiwu
Habib Rezanejad
H.A. Jones
Indrani Bhattacharya
Irene Solaiman
Irina Sedenko
Isar Nejadgholi
J. Passmore
Joshua Seltzer
Julio Bonis Sanz
Karen Fort
Livia Macedo Dutra
Mairon Samagaio
Maraim Elbadri
Margot Mieskes
Marissa Kumar Gerchick
Martha Akinlolu
Michael McKenna
Mike Qiu
M. Ghauri
Mykola Burynok
Nafis Abrar
Nazneen Fatema Rajani
Nour Elkott
N. Fahmy
Olanrewaju Samuel
Ran An
R. Kromann
Ryan Hao
Samira Hassan Alizadeh
Sarmad Shubber
Silas L. Wang
Sourav Roy
Sylvain Viguier
Thanh-Cong Le
Tobi Oyebade
T. Le
Yoyo Yang
Zach Nguyen
Abhinav R. Kashyap
Alfredo Palasciano
Alison Callahan
Anima Shukla
Antonio Miranda-Escalada
Ayush Singh
Benjamin Beilharz
Bo Wang
Caio Matheus Fonseca De Brito
Chenxi Zhou
Chirag Jain
Chuxin Xu
Clémentine Fourrier
Daniel León Periñán
Daniel Molano
Dian Yu
Enrique Manjavacas
Fabio Barth
Florian Fuhrimann
Gabriel Altay
Giyaseddin Bayrak
Gully Burns
Helena U. Vrabec
I. Bello
Isha Dash
J. Kang
John Michael Giorgi
Jonas Golde
J. Posada
Karthi Sivaraman
Lokesh Bulchandani
Li Li
Luisa Shinzato
Madeleine Hahn de Bykhovetz
Maiko Takeuchi
Marc Pamies
M. A. Castillo
Marianna Nezhurina
Mario Sanger
Matthias Samwald
Michael Joseph Cullan
Michael Weinberg
Michiel De Wolf
Mina Mihaljcic
Minna Liu
Moritz Freidank
Myungsun Kang
Natasha Seelam
Nathan Dahlberg
Nicholas Michio Broad
Nikolaus Muellner
Pascale Fung
Patricia Haller
Ramya Chandrasekhar
Patrick Haller
Renata Eisenberg
Robert Martin
Rodrigo Canalli
Rosaline Su
Ruisi Su
Samuel Cahyawijaya
Samuele Garda
Shlok S Deshmukh
Shubhanshu Mishra
Sid Kiblawi
Simon Ott
Sinee Sang-aroonsiri
Srishti Kumar
Stefan Schweter
Sushil Pratap Bharati
Tanmay Laud
Théo Gigant
Tomoya Kainuma
Wojciech Kusa
Yanis Labrak
Yashasvi Bajaj
Yash Venkatraman
Yifan Xu
Ying Xu
Yu Xu
Zhijun Tan
Zhongli Xie
Zifan Ye
Mathilde Le Bras
Younes Belkada
Thomas Wolf
The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
Hugo Laurençon
Lucile Saulnier
Thomas Wang
Christopher Akiki
Albert Villanova del Moral
Teven Le Scao
Leandro Von Werra
Chenghao Mou
Eduardo González Ponferrada
Huu Nguyen
Jörg Frohberg
Mario Šaško
Quentin Lhoest
Angelina McMillan-Major
Gérard Dupont
Stella Biderman
Anna Rogers
Loubna Ben allal
Francesco De Toni
Giada Pistilli
Olivier Nguyen
Somaieh Nikpoor
Maraim Masoud
Pierre Colombo
Javier de la Rosa
Paulo Villegas
Tristan Thrush
Shayne Longpre
Sebastian Nagel
Leon Weber
Manuel Romero Muñoz
Jian Zhu
Daniel Van Strien
Zaid Alyafeai
Khalid Almubarak
Vu Minh Chien
Itziar Gonzalez-Dios
Aitor Soroa
Kyle Lo
Pedro Ortiz Suarez
Aaron Gokaslan
Shamik Bose
Long Phan
Hieu Tran
Ian Yu
Suhas Pai
Jenny Chim
Violette Lepercq
Suzana Ilić
Margaret Mitchell
Yacine Jernite
As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.