
Jay Gala

Master's Research - McGill
Principal Supervisor
Co-supervisor
Research Topics
Representation Learning
Multimodal Learning
Deep Learning
Large Language Models (LLM)
Generative Models
Natural Language Processing
Vision and Language

Publications

MMTEB: Massive Multilingual Text Embedding Benchmark
Kenneth Enevoldsen
Isaac Chung
Márton Kardos
Ashwin Mathur
David Stap
Wissam Siblini
Dominik Krzemiński
Genta Indra Winata
Saba Sturua
Saiteja Utpala
Mathieu Ciancone
Marion Schaeffer
Gabriel Sequeira
Shreeya Dhakal
Jonathan Rystrøm
Roman Solomatin
Ömer Veysel Çağatan
Akash Kundu
Martin Bernstorff
Shitao Xiao
Akshita Sukhlecha
Bhavish Pahwa
Rafał Poświata
Kranthi Kiran GV
Shawon Ashraf
Daniel Auras
Björn Plüster
Jan Philipp Harries
Loïc Magne
Isabelle Mohr
Mariya Hendriksen
Dawei Zhu
Hippolyte Gisserot-Boukhlef
Tom Aarsen
Jan Kostkan
Konrad Wojtasik
Taemin Lee
Marek Suppa
Crystina Zhang
Roberta Rocca
Mohammed Hamdy
Andrianos Michail
John Yang
Manuel Faysse
Aleksei Vatolin
Nandan Thakur
Dipam Vasani
Pranjal A Chitale
Simone Tedeschi
Nguyen Tai
Artem Snegirev
Michael Günther
Mengzhou Xia
Weijia Shi
Jordan Clive
Gayatri K
Maksimova Anna
Silvan Wehrli
Maria Tikhonova
Henil Shalin Panchal
Aleksandr Abramov
Malte Ostendorff
Zheng Liu
Simon Clematide
Lester James Validad Miranda
Alena Fenogenova
Guangyu Song
Ruqiya Bin Safi
Wen-Ding Li
Alessia Borghini
Federico Cassano
Hongjin Su
Jimmy Lin
Howard Yen
Lasse Hansen
Sara Hooker
Chenghao Xiao
Orion Weller
Niklas Muennighoff
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost.
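The correlation-based downsampling described in the abstract lends itself to a short illustration. The sketch below is a minimal, hypothetical Python rendering of the idea, not the paper's actual procedure: the function name, the score-matrix layout, and the 0.95 threshold are all assumptions. It greedily drops the task most correlated with the remaining ones, accepting a drop only while the Spearman correlation between model rankings on the reduced set and on the full set stays high, which is the "similar ordering at a fraction of the cost" property the abstract highlights.

```python
import numpy as np
from scipy.stats import spearmanr

def downsample_tasks(scores: np.ndarray, target_n: int,
                     min_rank_corr: float = 0.95):
    """Greedy correlation-based task downsampling (illustrative sketch only).

    scores: (n_models, n_tasks) matrix of per-task evaluation scores.
    Repeatedly drops the most redundant task (highest mean |correlation|
    with the other remaining tasks) while the Spearman correlation between
    model rankings on the reduced set and on the full set stays above
    `min_rank_corr`. Names and thresholds are assumptions, not values
    taken from the paper.
    """
    full_mean = scores.mean(axis=1)              # per-model mean on all tasks
    keep = list(range(scores.shape[1]))

    while len(keep) > target_n:
        sub = scores[:, keep]
        corr = np.corrcoef(sub.T)                # (tasks x tasks) correlations
        np.fill_diagonal(corr, 0.0)
        redundancy = np.abs(corr).mean(axis=0)   # mean |corr| with other tasks

        # Try dropping candidates from most to least redundant; accept the
        # first drop that keeps the model ranking close to the full-set one.
        for idx in np.argsort(redundancy)[::-1]:
            trial = [t for j, t in enumerate(keep) if j != idx]
            rho, _ = spearmanr(full_mean, scores[:, trial].mean(axis=1))
            if rho >= min_rank_corr:
                keep = trial
                break
        else:
            break                                # no drop preserves rankings
    return keep

# Toy usage on synthetic scores: 20 models x 50 tasks.
rng = np.random.default_rng(0)
scores = rng.random((20, 50))
subset = downsample_tasks(scores, target_n=15)
print(len(subset), "tasks kept")
```

The Spearman check is what makes the reduction safe to use for leaderboard-style comparison: tasks are removed for redundancy, never at the expense of the relative model ordering established on the full benchmark.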
CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
David Orlando Romero Mogrovejo
Chenyang Lyu
Haryo Akbarianto Wibowo
Santiago Góngora
Aishik Mandal
Sukannya Purkayastha
Jesus-German Ortiz-Barajas
Emilio Villa Cueva
Jinheon Baek
Soyeong Jeong
Injy Hamed
Zheng Xin Yong
Zheng Wei Lim
Paula Mónica Silva
Jocelyn Dunstan
Mélanie Jouitteau
David Le Meur
Joan Nwatu
Ganzorig Batnasan
Munkh-Erdene Otgonbold
Munkhjargal Gochoo
Guido Ivetta
Luciana Benotti
Laura Alonso Alemany
Hernán Maina
Jiahui Geng
Tiago Timponi Torrent
Frederico Belcavello
Israel Abebe Azime
Marcelo Viridiano
Jan Christian Blaise Cruz
Dan John Velasco
Zara Burzo
Chenxi Whitehouse
Artem Abzaliev
Teresa Clifford
Gráinne Caulfield
Teresa Lynn
Christian Salamea-Palacios
Yova Kementchedjhieva
Mihail Minkov Mihaylov
Henok Biadglign Ademtew
Bontu Fufa Balcha
Rada Mihalcea
Atnafu Lambebo Tonja
Maria Camila Buitrago Cabrera
Naome Etori
Gisela Vallejo
Holy Lovenia
Ruochen Zhang
Marcos Estecha-Garitagoitia
Mario Rodríguez-Cantelar
Toqeer Ehsan
Rendi Chevi
Muhammad Farid Adilazuarda
Ryandito Diandaru
Samuel Cahyawijaya
Fajri Koto
Tatsuki Kuribayashi
Haiyue Song
Aditya Nanda Kishore Khandavally
Thanmay Jayakumar
Vladimir Araujo
Raj Dabre
Mohamed Fazli Mohamed Imam
Kumaranage Ravindu Yasas Nagasinghe
Alina Dragonetti
Luis Fernando D'Haro
Oana Ignat
Olivier Niyomugisha
Pranjal A Chitale
Fauzan Farooqui
Alham Fikri Aji
Thamar Solorio