
David Ifeoluwa Adelani

Core Academic Member
Canada CIFAR AI Chair
McGill University
Research Topics
Deep Learning
Natural Language Processing
Representation Learning
Speech Processing

Biography

David Adelani is an assistant professor at McGill University’s School of Computer Science under the Fighting Inequities initiative, and a core academic member of Mila – Quebec Artificial Intelligence Institute.

Adelani’s research focuses on multilingual natural language processing with special attention to under-resourced languages.

Current Students

Master's Research - McGill University
Master's Research - McGill University
Research Intern - McGill University
Collaborating researcher - McGill University
Postdoctorate - McGill University
Research Intern - McGill University
PhD - McGill University
Research Intern - McGill University
PhD - McGill University
PhD - McGill University
Research Intern - McGill University
Master's Research - McGill University
Research Intern - McGill University
Professional Master's - Université de Montréal
Research Intern - McGill University
Master's Research - McGill University

Publications

XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages
Sebastian Ruder
Jonathan H. Clark
Alexander Gutkin
Mihir Kale
Min Ma
Massimo Nicosia
Shruti Rijhwani
Parker Riley
Jean Michel Amath Sarr
Xinyi Wang
John Frederick Wieting
Nitish Gupta
Anna Katanova
Christo Kirov
Dana L Dickinson
Brian Roark
Bidisha Samanta
Connie Tao
Vera Axelrod
Isaac Rayburn Caswell
Colin Cherry
Dan Garrette
Reeve Ingle
Melvin Johnson
Dmitry Panteleev
Partha Talukdar
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies, including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios, including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.
Better Quality Pre-training Data and T5 Models for African Languages
Akintunde Oladipo
Mofetoluwa Adeyemi
Orevaoghene Ahia
Abraham Toluwase Owodunni
Odunayo Ogundepo
Jimmy Lin
In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for …
Improving Language Plasticity via Pretraining with Active Forgetting
Yihong Chen
Kelly Marchisio
Roberta Raileanu
Pontus Stenetorp
Sebastian Riedel
Mikel Artetxe
Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown it possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an active forgetting mechanism during pretraining, as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every K updates during pretraining, we encourage the PLM to improve its ability to learn new embeddings within a limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation, but also outperform standard ones in a low-data regime, particularly for languages that are distant from English. Code will be available at https://github.com/facebookresearch/language-model-plasticity.
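The core mechanism the abstract describes (re-initializing the embedding layer every K updates while the rest of the model keeps training) can be sketched in a few lines of plain Python. The names below are hypothetical illustrations, not the paper's code; a real implementation would reset an embedding module inside a framework such as PyTorch while leaving the transformer body untouched:

```python
import random

def init_embeddings(vocab_size, dim, rng):
    """Fresh Gaussian-initialized embedding table (vocab_size x dim)."""
    return [[rng.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(vocab_size)]

def train_with_active_forgetting(num_updates, reset_every_k, vocab_size=8, dim=4, seed=0):
    """Sketch of the active-forgetting schedule: every reset_every_k
    updates the embedding table is thrown away and re-initialized,
    forcing the (unmodeled) transformer body to rely on representations
    that can be relearned quickly from new embeddings."""
    rng = random.Random(seed)
    embeddings = init_embeddings(vocab_size, dim, rng)
    resets = 0
    for step in range(1, num_updates + 1):
        # ... a normal gradient update on the body and embeddings would go here ...
        if step % reset_every_k == 0:
            embeddings = init_embeddings(vocab_size, dim, rng)  # forget: re-init
            resets += 1
    return embeddings, resets
```

With `num_updates=100` and `reset_every_k=10`, the embedding table is re-initialized 10 times; only the reset schedule is modeled here, since the surrounding optimization loop is framework-specific.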
YORC: Yoruba Reading Comprehension dataset
Aremu Anuoluwapo
Jesujoba Oluwadara Alabi
In this paper, we create YORC: a new multi-choice Yoruba Reading Comprehension dataset that is based on Yoruba high-school reading comprehension examinations. We provide baseline results by performing cross-lingual transfer using the existing English RACE dataset with a pre-trained encoder-only model. Additionally, we provide results by prompting large language models (LLMs) like GPT-4.
Consultative engagement of stakeholders toward a roadmap for African language technologies
Kathleen Siminyu
Jade Abbott
Kọ́lá Túbọ̀sún
Aremu Anuoluwapo
Blessing Kudzaishe Sibanda
Kofi Yeboah
Masabata Mokgesi-Selinga
Frederick R. Apina
Angela Thandizwe Mthembu
Arshath Ramkilowan
Babatunde Oladimeji
NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification
Iyanuoluwa Shode
Jing Peng
Anna Feldman
Africa has over 2000 indigenous languages, but they are under-represented in NLP research due to a lack of datasets. In recent years, there has been progress in developing labelled corpora for African languages. However, they are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create a new dataset of Nollywood movie reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian Pidgin, and Yoruba). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. By leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain and cross-lingual adaptation from the English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from Twitter in the same language. To further mitigate the domain difference, we leverage machine translation from English to the other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While machine translation to low-resource languages is often of low quality, our analysis shows that sentiment-related words are often preserved.
SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)
Shamsuddeen Hassan Muhammad
Idris Abdulmumin
Seid Muhie Yimam
Ibrahim Ahmad
Nedjma Ousidhoum
Abinew Ayele
Saif Mohammad
Meriem Beloucif
ε kú <mask>: Integrating Yorùbá cultural greetings into machine translation
Idris Akinade
Jesujoba Oluwadara Alabi
Clement Odoje
Dietrich Klakow
This paper investigates the performance of massively multilingual neural machine translation (NMT) systems in translating Yorùbá greetings (ε kú <mask>), which are a big part of Yorùbá language and culture, into English. To evaluate these models, we present IkiniYorùbá, a Yorùbá-English translation dataset containing some Yorùbá greetings and sample use cases. We analysed the performance of different multilingual NMT systems, including Google and NLLB, and show that these models struggle to accurately translate Yorùbá greetings into English. In addition, we trained a Yorùbá-English model by fine-tuning an existing NMT model on the training split of IkiniYorùbá, and this achieved better performance than the pre-trained multilingual NMT models, even though they were trained on a large volume of data.
AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages
Jiayi Wang
Sweta Agrawal
Ricardo Rei
Eleftheria Briakou
Marine Carpuat
Marek Masiak
Xuanli He
Sofia Bourhim
Andiswa Bukula
Muhidin A. Mohamed
Temitayo Olatoye
Hamam Mokayed
Christine Mwase
Wangui Kimotho
Foutse Yuehgoh
Aremu Anuoluwapo
Shamsuddeen Hassan Muhammad
Salomey Osei
Abdul-Hakeem Omotayo
Chiamaka Ijeoma Chukwuneke
Perez Ogayo
Oumaima Hourrane
Salma El Anigri
Lolwethu Ndolela
Thabiso Mangwana
Shafie Abdi Mohamed
Ayinde Hassan
Oluwabusayo Olufunke Awoyomi
Lama Alkhaled
Sana Sabah al-azzawi
Naome Etori
Millicent Ochieng
Clemencia Siro
Samuel Njoroge
Eric Muchiri
Wangari Kimotho
Lyse Naomi Wamba
Daud Abolade
Simbiat Ajao
Tosin Adewumi
Iyanuoluwa Shode
Ricky Macharm
Ruqayya Nasir Iro
Saheed Salahudeen Abdullahi
Stephen Moore
Bernard Opoku
Zainab Akinjobi
Abeeb Afolabi
Nnaemeka Casmir Obiefuna
Onyekachi Ogbu
Sam Brian
Verrah Akinyi Otiende
Chinedu Emmanuel Mbonu
Toadoum Sari Sakayo
Pontus Stenetorp
Despite the progress we have recorded in scaling multilingual machine translation (MT) models and evaluation data to several under-resourced African languages, it is difficult to measure accurately the progress we have made on these languages because evaluation is often performed on n-gram matching metrics like BLEU, which often correlate poorly with human judgments. Embedding-based metrics such as COMET correlate better; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with a simplified MQM guideline for error-span annotation and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET, a COMET evaluation metric for African languages, by leveraging DA training data from high-resource languages and an African-centric multilingual encoder (AfroXLM-Roberta) to create the state-of-the-art evaluation metric for African-language MT with respect to Spearman rank correlation with human judgments (+0.406).
AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages
Shamsuddeen Hassan Muhammad
Idris Abdulmumin
Abinew Ayele
Nedjma Ousidhoum
Seid Muhie Yimam
Ibrahim Ahmad
Meriem Beloucif
Saif Mohammad
Sebastian Ruder
Oumaima Hourrane
Alipio Jorge
Pavel Brazdil
Felermino Ali
Davis David
Salomey Osei
Bello Shehu-Bello
Falalu Lawan
Tajuddeen Gwadabe
Samuel Rutunda
Tadesse Belay
Wendimu Baye Messelle
Hailu Balcha
Sisay Adugna Chala
Hagos Gebremichael
Bernard Opoku
Stephen Arthur
Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2023
Francesco Tinner
Mammad Hajili
Omer Goldman
Muhammad Farid Adilazuarda
Muhammad Dehan Al Kautsar
Aziza Mirsaidova
Müge Kural
Dylan Massey
Chiamaka Ijeoma Chukwuneke
Chinedu Emmanuel Mbonu
Damilola Oluwaseun Oloyede
Kayode Olaleye
Jonathan Atala
Benjamin A. Ajibade
Saksham Bassi
Najoung Kim
Duygu Ataman
Large language models (LLMs) excel in language understanding and generation, especially in English, which has ample public benchmarks for various natural language processing (NLP) tasks. Nevertheless, their reliability across different languages and domains remains uncertain. Our new shared task introduces a novel benchmark to assess the ability of multilingual LLMs to comprehend and produce language under sparse settings, particularly in scenarios with under-resourced languages, with an emphasis on the ability to capture logical, factual, or causal relationships within lengthy text contexts. The shared task consists of two sub-tasks crucial to information retrieval: Named Entity Recognition (NER) and Reading Comprehension (RC), in 7 data-scarce languages: Azerbaijani, Igbo, Indonesian, Swiss German, Turkish, Uzbek, and Yorùbá, which previously lacked annotated resources for information retrieval tasks. Our evaluation of leading LLMs reveals that, despite their competitive performance, they still have notable weaknesses, such as producing output in the non-target language or providing counterfactual information that cannot be inferred from the context. As more advanced models emerge, the benchmark will remain essential for supporting fairness and applicability in information retrieval systems.
MasakhaNEWS: News Topic Classification for African languages
Marek Masiak
Israel Abebe Azime
Jesujoba Alabi
Atnafu Lambebo Tonja
Christine Mwase
Odunayo Ogundepo
Bonaventure F. P. Dossou
Akintunde Oladipo
Doreen Nixdorf
Chris Chinenye Emezue
Sana al-azzawi
Blessing Sibanda
Davis David
Lolwethu Ndolela
Jonathan Mukiibi
Tunde Ajayi
Tatiana Moteu
Brian Odhiambo
Abraham Owodunni
Nnaemeka Obiefuna
Shamsuddeen Hassan Muhammad
Saheed Abdullahi Salahudeen
Mesay Gemeda Yigezu
Tajuddeen Gwadabe
Idris Abdulmumin
Mahlet Taye
Oluwabusayo Awoyomi
Iyanuoluwa Shode
Tolulope Adelani
Habiba Abdulganiyu
Abdul-Hakeem Omotayo
Adetola Adeeko
Anuoluwapo Aremu
Olanrewaju Samuel
Clemencia Siro
Wangari Kimotho
Onyekachi Ogbu
Chinedu Mbonu
Chiamaka Chukwuneke
Samuel Fanijo
Oyinkansola Awosan
Tadesse Kebede
Toadoum Sari Sakayo
Pamela Nyatsine
Freedmore Sidume
Oreen Yousuf
Mardiyyah Oduwole
Ussen Kimanuka
Kanda Patrick Tshinu
Thina Diko
Siyanda Nxakama
Abdulmejid Johar
Sinodos Nigusse
Muhidin Mohamed
Shafie Mohamed
Fuad Mire Hassan
Moges Ahmed Mehamed
Evrard Ngabire
Pontus Stenetorp
African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g., named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and the Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as little as 10 examples per label, we achieve more than 90% (i.e., 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) using the PET approach.