
Dzmitry Bahdanau

Core Industry Member
Canada CIFAR AI Chair
Adjunct Professor, McGill University, School of Computer Science
AI Research Scientist, ServiceNow

Biography

Dzmitry Bahdanau is an Adjunct Professor at McGill University and a Research Scientist at ServiceNow Element AI. He previously obtained his PhD at Université de Montréal / Mila – Quebec Artificial Intelligence Institute, working with Yoshua Bengio. He is interested in fundamental and applied questions of natural language understanding. His main research areas include semantic parsing, language user interfaces, systematic generalization, and hybrid neural-symbolic systems.

Current Students

Research Master's - McGill University
Research Master's - McGill University

Publications

LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
Parishad BehnamGhader
Vaibhav Adlakha
Marius Mosbach
Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation
Ramakrishna Appicharla
Kamal Kumar
Asif Gupta
The widespread online communication in a modern multilingual world has provided opportunities to blend more than one language (aka code-mixed language) in a single utterance. This has resulted in a formidable challenge for computational models due to the scarcity of annotated data and presence of noise. A potential solution to mitigate the data scarcity problem in a low-resource setup is to leverage existing data in a resource-rich language through translation. In this paper, we tackle the problem of code-mixed (Hinglish and Bengalish) to English machine translation. First, we synthetically develop HINMIX, a parallel corpus of Hinglish to English, with ~4.2M sentence pairs. Subsequently, we propose RCMT, a robust perturbation based joint-training model that learns to handle noise in real-world code-mixed text by parameter sharing across clean and noisy words. Further, we show the adaptability of RCMT in a zero-shot setup for Bengalish to English translation. Our evaluation and comprehensive analyses qualitatively and quantitatively demonstrate the superiority of RCMT over state-of-the-art code-mixed and robust translation methods.
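Below is a minimal, illustrative sketch of the two ingredients the abstract mentions: synthesizing code-mixed source sentences by word-level substitution (in the spirit of HINMIX) and perturbing them to mimic real-world noise. The toy lexicon and function names are placeholders, not the paper's actual pipeline.

```python
import random

# Toy romanized-Hindi-to-English lexicon; purely illustrative. HINMIX is built
# from real parallel data and alignments, not a hand-written dictionary.
HI_TO_EN = {
    "mujhe": "I", "bahut": "very", "pasand": "like", "hai": "is",
    "kal": "tomorrow", "milte": "meet", "hain": "are",
}

def synthesize_code_mixed(hindi_sentence: str, mix_ratio: float = 0.5) -> str:
    """Replace a fraction of romanized Hindi tokens with English counterparts
    to imitate Hinglish code-mixing (a simplification of the paper's setup)."""
    mixed = []
    for tok in hindi_sentence.split():
        if tok in HI_TO_EN and random.random() < mix_ratio:
            mixed.append(HI_TO_EN[tok])  # switch this token to English
        else:
            mixed.append(tok)            # keep the romanized Hindi token
    return " ".join(mixed)

def perturb(sentence: str, drop_prob: float = 0.1) -> str:
    """Inject noise (random token drops) to mimic the noisy real-world text
    that a robust joint-training scheme like RCMT has to cope with."""
    kept = [t for t in sentence.split() if random.random() > drop_prob]
    return " ".join(kept) if kept else sentence

if __name__ == "__main__":
    hi = "mujhe cricket bahut pasand hai"
    en = "I like cricket very much"
    mixed = synthesize_code_mixed(hi)
    print("code-mixed source:", mixed)
    print("noisy variant    :", perturb(mixed))
    print("target           :", en)
```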
Self-evaluation and self-prompting to improve the reliability of LLMs
Alexandre Piché
Aristides Milios
In order to safely deploy Large Language Models (LLMs), they must be capable of dynamically adapting their behavior based on their level of knowledge and uncertainty associated with specific topics. This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach since it depends on the internal knowledge of an LLM. By default, LLMs are trained to maximize the next token likelihood, which does not teach the model to modulate its answer based on its level of uncertainty. In order to learn self-restraint, we devise a simple objective that encourages the model to produce generations that it is confident in. To optimize this objective, we introduce ReSearch, an iterative search algorithm based on self-evaluation and self-prompting. Our method results in fewer hallucinations overall, both for known and unknown topics, as the model learns to selectively restrain itself. In addition, our method elegantly incorporates the ability to decline, when the model assesses that it cannot provide a response without a high proportion of hallucination.
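A rough sketch of the kind of sample-evaluate-decline loop the abstract describes; the `generate` and `self_evaluate` callables are hypothetical stand-ins for LLM calls, and the paper's actual objective and prompting scheme differ from this toy version.

```python
from typing import Callable

def research_style_loop(
    question: str,
    generate: Callable[[str], str],              # LLM sampling call (hypothetical interface)
    self_evaluate: Callable[[str, str], float],  # LLM-scored confidence in [0, 1]
    n_candidates: int = 4,
    confidence_threshold: float = 0.7,
) -> str:
    """Sample several candidate answers, score each with the model itself,
    and decline when no candidate clears a confidence threshold."""
    best_answer, best_score = None, 0.0
    for _ in range(n_candidates):
        candidate = generate(question)
        score = self_evaluate(question, candidate)
        if score > best_score:
            best_answer, best_score = candidate, score
    if best_answer is None or best_score < confidence_threshold:
        return "I am not confident enough to answer this question."  # self-restraint
    return best_answer

# Toy usage with stub callables standing in for real LLM calls.
answer = research_style_loop(
    "Who wrote 'Neural Machine Translation by Jointly Learning to Align and Translate'?",
    generate=lambda q: "Bahdanau, Cho, and Bengio",
    self_evaluate=lambda q, a: 0.9,
)
print(answer)
```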
StarCoder: may the source be with you!
Raymond Li
Loubna Ben allal
Yangtian Zi
Niklas Muennighoff
Denis Kocetkov
Chenghao Mou
Marc Marone
Christopher Akiki
Jia LI
Jenny Chim
Qian Liu
Evgenii Zheltonozhskii
Terry Yue Zhuo
Thomas Wang
Olivier Dehaene
Mishig Davaadorj
Joel Lamy-Poirier
Joao Monteiro
Oleh Shliazhko
Ming-Ho Yee … (see 49 more)
Nicolas Gontier
Jian Zhu
Nicholas Meade
Armel Zebaze
Logesh Kumar Umapathi
Ben Lipkin
Muhtasham Oblokulov
Zhiruo Wang
Rudra Murthy
Jason T Stillerman
Siva Sankalp Patel
Dmitry Abulkhanov
Marco Zocca
Manan Dey
Zhihan Zhang
N. Fahmy
Urvashi Bhattacharyya
Wenhao Yu
Swayam Singh
Sasha Luccioni
Paulo Villegas
Jan Ebert
M. Kunakov
Fedor Zhdanov
Manuel Romero
Tony Lee
Nadav Timor
Jennifer Ding
Claire S Schlesinger
Hailey Schoelkopf
Jana Ebert
Tri Dao
Mayank Mishra
Alex Gu
Jennifer Robinson
Sean Hughes
Carolyn Jane Anderson
Brendan Dolan-Gavitt
Danish Contractor
Daniel Fried
Yacine Jernite
Carlos Muñoz Ferrandis
Sean M. Hughes
Thomas Wolf
Arjun Guha
Leandro Von Werra
Harm de Vries
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
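The fast large-batch inference mentioned above comes from multi-query attention, in which all query heads share a single key/value head, shrinking the KV cache. The snippet below is a generic sketch of that mechanism under simplified assumptions (no masking, biases, or output projection), not StarCoder's actual implementation.

```python
import torch

def multi_query_attention(x, w_q, w_k, w_v, n_heads):
    """Multi-query attention: one K/V head shared across all query heads."""
    b, t, d = x.shape
    head_dim = d // n_heads
    q = (x @ w_q).view(b, t, n_heads, head_dim).transpose(1, 2)  # (b, h, t, hd)
    k = (x @ w_k).view(b, t, 1, head_dim).transpose(1, 2)        # single key head
    v = (x @ w_v).view(b, t, 1, head_dim).transpose(1, 2)        # single value head
    attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)
    return (attn @ v).transpose(1, 2).reshape(b, t, d)

# Example: 8 query heads share one key/value head of dimension 8.
x = torch.randn(2, 16, 64)
w_q = torch.randn(64, 64)
w_k = torch.randn(64, 8)
w_v = torch.randn(64, 8)
print(multi_query_attention(x, w_q, w_k, w_v, n_heads=8).shape)  # torch.Size([2, 16, 64])
```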
Evaluating In-Context Learning of Libraries for Code Generation
Arkil Patel
Pradeep Dasigi
In-Context Learning for Text Classification with Many Labels
Aristides Milios
RepoFusion: Training Code Models to Understand Your Repository
Disha Shrivastava
Denis Kocetkov
Harm de Vries
Torsten Scholak
Despite the huge success of Large Language Models (LLMs) in coding assistants like GitHub Copilot, these models struggle to understand the context present in the repository (e.g., imports, parent classes, files with similar names, etc.), thereby producing inaccurate code completions. This effect is more pronounced when using these assistants for repositories that the model has not seen during training, such as proprietary software or work-in-progress code projects. Recent work has shown the promise of using context from the repository during inference. In this work, we extend this idea and propose RepoFusion, a framework to train models to incorporate relevant repository context. Experiments on single-line code completion show that our models trained with repository context significantly outperform much larger code models such as CodeGen-16B-multi.
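A simplified illustration of the underlying idea of supplying repository context to the model; RepoFusion itself trains the model over multiple retrieved contexts rather than simply concatenating them into one prompt, and the helper below is hypothetical.

```python
def build_repo_prompt(target_prefix: str, repo_contexts: list[str], max_chars: int = 2000) -> str:
    """Prepend repository context (imports, parent classes, similarly named
    files, ...) to the code being completed, within a character budget."""
    budget = max_chars - len(target_prefix)
    selected = []
    for ctx in repo_contexts:
        if len(ctx) > budget:
            break
        selected.append(ctx)
        budget -= len(ctx)
    return "\n\n".join(selected + [target_prefix])

prompt = build_repo_prompt(
    target_prefix="def load_config(path):\n    ",
    repo_contexts=[
        "# utils/io.py\nimport json\n\ndef read_json(path): ...",
        "# config/defaults.py\nDEFAULTS = {'batch_size': 32}",
    ],
)
print(prompt)
```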
The Stack: 3 TB of permissively licensed source code
Denis Kocetkov
Raymond Li
Loubna Ben allal
Jia LI
Chenghao Mou
Carlos Muñoz Ferrandis
Yacine Jernite
Margaret Mitchell
Sean Hughes
Thomas Wolf
Leandro Von Werra
Harm de Vries
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissively licensed source code in 30 programming languages. We describe how we collect the full dataset, construct a permissively licensed subset, present a data governance plan, discuss limitations, and show promising results on text2code benchmarks by training 350M-parameter decoders on different Python subsets. We find that (1) near-deduplicating the data significantly boosts performance across all experiments, and (2) it is possible to match previously reported HumanEval and MBPP performance using only permissively licensed data. We make the dataset available at https://hf.co/BigCode, provide a tool called "Am I in The Stack" (https://hf.co/spaces/bigcode/in-the-stack) for developers to search The Stack for copies of their code, and provide a process for code to be removed from the dataset by following the instructions at https://www.bigcode-project.org/docs/about/the-stack/.
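The near-deduplication finding can be illustrated with a toy Jaccard-similarity filter over character shingles; The Stack's actual pipeline uses MinHash with locality-sensitive hashing to scale, so treat this purely as a sketch of the idea.

```python
def shingles(text: str, n: int = 5) -> set:
    """Character n-gram shingles used as the unit of comparison."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def near_deduplicate(files: list[str], threshold: float = 0.85) -> list[str]:
    """Greedy near-deduplication: keep a file only if it is not too similar
    to any file already kept (quadratic toy version, not MinHash + LSH)."""
    kept, kept_shingles = [], []
    for f in files:
        s = shingles(f)
        if all(jaccard(s, ks) < threshold for ks in kept_shingles):
            kept.append(f)
            kept_shingles.append(s)
    return kept

docs = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a + b\n\n",   # whitespace-only near-duplicate
    "def mul(a, b):\n    return a * b\n",
]
print(len(near_deduplicate(docs)))  # 2 -- the near-duplicate copy is dropped
```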
SantaCoder: don't reach for the stars!
Loubna Ben allal
Raymond Li
Denis Kocetkov
Chenghao Mou
Christopher Akiki
Carlos Muñoz Ferrandis
Niklas Muennighoff
Mayank Mishra
Alex Gu
Manan Dey
Logesh Kumar Umapathi
Carolyn Jane Anderson
Yangtian Zi
Joel Lamy Poirier
Hailey Schoelkopf
S. Troshin
Dmitry Abulkhanov
Manuel L. Romero
M. Lappert
Francesco De Toni … (see 21 more)
Bernardo Garc'ia del R'io
Qian Liu
Shamik Bose
Urvashi Bhattacharyya
Terry Yue Zhuo
Ian Yu
Paulo Villegas
Marco Zocca
Sourab Mangrulkar
D. Lansky
Huu Nguyen
Danish Contractor
Luisa Villa
Jia LI
Yacine Jernite
Sean Christopher Hughes
Daniel Fried
Arjun Guha
Harm de Vries
Leandro Von Werra
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at https://hf.co/bigcode.
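For readers unfamiliar with infilling, these models are trained with a fill-in-the-middle (FIM) transformation that rearranges a document into prefix/suffix/middle segments separated by sentinel tokens. The sketch below assumes the <fim_prefix>/<fim_suffix>/<fim_middle> spellings used in the StarCoder family; the exact token strings in the SantaCoder release may differ, so check the released tokenizer.

```python
def to_fim_example(prefix: str, middle: str, suffix: str) -> str:
    """Format a fill-in-the-middle training example in prefix-suffix-middle
    order; sentinel token strings are placeholders for the real vocabulary."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

code = "def greet(name):\n    return f'Hello, {name}!'\n"
# In practice the split points are sampled randomly; a fixed split is shown for clarity.
prefix, middle, suffix = code[:17], code[17:37], code[37:]
print(to_fim_example(prefix, middle, suffix))
```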
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
Arkil Patel
Satwik Bhattamishra
PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation
Gaurav Sahu
Olga Vechtomova
Issam Hadj Laradji
Data augmentation is a widely used technique to address the problem of text classification when there is a limited amount of training data. Recent work often tackles this problem using large language models (LLMs) like GPT3 that can generate new examples given already available ones. In this work, we propose a method to generate more helpful augmented data by utilizing the LLM's abilities to follow instructions and perform few-shot classifications. Our specific PromptMix method consists of two steps: 1) generate challenging text augmentations near class boundaries; however, generating borderline examples increases the risk of false positives in the dataset, so we 2) relabel the text augmentations using a prompting-based LLM classifier to enhance the correctness of labels in the generated data. We evaluate the proposed method in challenging 2-shot and zero-shot settings on four text classification datasets: Banking77, TREC6, Subjectivity (SUBJ), and Twitter Complaints. Our experiments show that generating and, crucially, relabeling borderline examples facilitates the transfer of knowledge of a massive LLM like GPT3.5-turbo into smaller and cheaper classifiers like DistilBERT.
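A schematic of the two-step recipe described above, with a hypothetical `llm` callable standing in for an instruction-following model and prompt wording that is illustrative rather than the paper's exact templates.

```python
from typing import Callable

def promptmix_augment(
    llm: Callable[[str], str],          # hypothetical text-in/text-out LLM call
    class_a: str,
    class_b: str,
    seed_examples: dict[str, list[str]],
    n_new: int = 4,
) -> list[tuple[str, str]]:
    """(1) Prompt an LLM for borderline examples that mix two classes, then
    (2) relabel each generated example with an LLM few-shot classifier."""
    gen_prompt = (
        f"Classes: {class_a}, {class_b}.\n"
        f"Examples of {class_a}: {seed_examples[class_a]}\n"
        f"Examples of {class_b}: {seed_examples[class_b]}\n"
        f"Write {n_new} short texts that sit near the boundary between the two classes, "
        "one per line."
    )
    candidates = [line for line in llm(gen_prompt).splitlines() if line.strip()]

    relabeled = []
    for text in candidates[:n_new]:
        cls_prompt = (
            f"Classify the text as either '{class_a}' or '{class_b}'. "
            f"Answer with the label only.\nText: {text}\nLabel:"
        )
        relabeled.append((text, llm(cls_prompt).strip()))
    return relabeled  # (text, label) pairs used to train a small classifier
```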