
David Ifeoluwa Adelani

Core Academic Member
Canada CIFAR AI Chair
McGill University

Biography

David Adelani is an incoming assistant professor in computer science and the fight against inequality at McGill University, and a Core Academic Member at Mila – Quebec AI Institute. His research focuses on multilingual natural language processing, with a particular emphasis on low-resource languages.

Publications

5th Workshop on African Natural Language Processing (AfricaNLP 2024)
Happy Buzaaba
Bonaventure F. P. Dossou
Hady Elsahar
Constantine Lignos
Atnafu Lambebo Tonja
Salomey Osei
Aremu Anuoluwapo
Clemencia Siro
Shamsuddeen Hassan Muhammad
Tajuddeen Gwadabe
Perez Ogayo
Israel Abebe Azime
Kayode Olaleye
Over 1 billion people live in Africa, and its residents speak more than 2,000 languages. But those languages are among the least represented in NLP research, and work on African languages is often sidelined at major venues. Over the past few years, a vibrant, collaborative community of researchers has formed around a sustained focus on NLP for the benefit of the African continent: national, regional, continental and even global collaborative efforts focused on African languages, African corpora, and tasks with importance in the African context. The AfricaNLP workshops have been a central venue for organizing, sustaining, and growing this focus, and we propose to continue this tradition with an AfricaNLP 2024 workshop in Vienna. Starting in 2020, the AfricaNLP workshop has become a core event for the African NLP community and has drawn global attendance and interest. Many of the participants are active in the Masakhane grassroots NLP community, allowing the community to convene, showcase, and share experiences with each other. Large-scale collaborative works have been enabled by participants who joined from the AfricaNLP workshop, such as MasakhaNER (61 authors), Quality Assessment of Multilingual Datasets (51 authors), Corpora Building for Twi (25 authors), and NLP for Ghanaian Languages (25 authors). Many first-time authors, through the mentorship program, found collaborators and published their first paper. Those mentorship relationships built trust and coherence within the community that continue to this day, and we aim to sustain them. In the contemporary AI landscape, generative AI has rapidly expanded with significant input and innovation from the global research community. This technology enables machines to generate novel content and showcases potential across a multitude of sectors. However, African languages remain underrepresented within this growth. Recognizing the urgency of addressing this gap has inspired the theme for the 2024 workshop, Adaptation of Generative AI for African Languages, which aims to bring together experts, linguists, and AI enthusiasts to explore solutions, collaborations, and strategies to amplify the presence of African languages in generative AI models.
AfriHG: News Headline Generation for African Languages
Toyib Ogunremi
Serah Sessi Akojenu
Anthony Soronnadi
Olubayo Adekanmbi
ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
Osvaldo Luamba Quinjica
In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages. However, this progress has predominantly bypassed very-low-resource languages, leaving a notable void in the multilingual landscape. This paper addresses that gap by introducing four tailored PLMs finetuned for Angolan languages using a Multilingual Adaptive Fine-tuning (MAFT) approach. We survey the role of informed embedding initialization and synthetic data in enhancing the performance of MAFT models on downstream tasks, improving over the SOTA AfroXLMR-base (developed through MAFT) and OFA (an effective embedding initialization) baselines by 12.3 and 3.8 points, respectively.
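To make the "informed embedding initialization" idea concrete, here is a minimal Python sketch that initializes the embeddings of newly added tokens from the subword embeddings of an existing multilingual PLM. The checkpoint and token list are placeholders, and this mean-of-subwords scheme is a simplification for illustration, not OFA's actual factorized initialization or the paper's exact method.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

base = "Davlan/afro-xlmr-base"                 # the AfroXLMR-base baseline named above
old_tok = AutoTokenizer.from_pretrained(base)  # keeps the original vocabulary
new_tok = AutoTokenizer.from_pretrained(base)  # copy that will grow
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical new tokens for an Angolan language (placeholders, not the paper's vocabulary).
new_tokens = ["kuzuela", "maka"]
new_tok.add_tokens(new_tokens)
model.resize_token_embeddings(len(new_tok))

emb = model.get_input_embeddings().weight
with torch.no_grad():
    for t in new_tokens:
        # Informed initialization: average the embeddings of the subwords
        # the original tokenizer splits the new token into.
        pieces = old_tok(t, add_special_tokens=False)["input_ids"]
        emb[new_tok.convert_tokens_to_ids(t)] = emb[pieces].mean(dim=0)

# MAFT would then continue masked-LM training on monolingual (and, per the
# abstract, synthetic) Angolan-language text starting from this model.
```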
EkoHate: Offensive and Hate Speech Detection for Code-switched Political Discussions on Nigerian Twitter
Comfort Eseohen Ilevbare
Jesujoba Oluwadara Alabi
Bakare Firdous Damilola
Abiola Oluwatoyin Bunmi
Adeyemo Oluwaseyi Adesina
Nigerians have a notable online presence and actively discuss political and topical matters. This was particularly evident throughout the 2023 general election, where Twitter was used for campaigning, fact-checking and verification, and both positive and negative discourse. However, little or no work has been done on detecting abusive language and hate speech in Nigeria. In this paper, we curate code-switched Twitter data directed at the three musketeers of the governorship election in Nigeria's most populous and economically vibrant state, Lagos State, with a view to detecting offensive and hate speech in political discussions. We develop EkoHate, an abusive language and hate speech dataset covering political discussions between the three candidates and their followers, using both a binary (normal vs. offensive) and a fine-grained four-label annotation scheme. We analyse our dataset and provide an empirical evaluation of state-of-the-art methods in both supervised and cross-lingual transfer learning settings. In the supervised setting, our evaluation shows that we can achieve 95.1 and 70.3 F1 points on the binary and four-label annotation schemes, respectively. Furthermore, we show that our dataset transfers well to two publicly available offensive datasets (OLID and HateUS2020), with at least 62.7 F1 points.
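As a sketch of the supervised binary setting described above, the following example runs one fine-tuning step of a pre-trained encoder classifier. The checkpoint, example texts, and labels are invented placeholders, not EkoHate data or the paper's exact setup; the four-label scheme would simply use num_labels=4.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "Davlan/afro-xlmr-base"  # placeholder encoder; the paper evaluates several
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

texts = ["Lagos traffic today no be here", "<an offensive tweet>"]  # invented examples
labels = torch.tensor([0, 1])  # 0 = normal, 1 = offensive

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)  # cross-entropy loss over the two labels
out.loss.backward()                  # one supervised fine-tuning step (optimizer omitted)
```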
Enhancing Transformer Models for Igbo Language Processing: A Critical Comparative Study
Anthony Soronnadi
Olubayo Adekanmbi
Chinazo Anebelundu
NaijaRC: A Multi-choice Reading Comprehension Dataset for Nigerian Languages
Aremu Anuoluwapo
Jesujoba Oluwadara Alabi
Daud Abolade
Nkechinyere Faith Aguobi
Shamsuddeen Hassan Muhammad
In this paper, we create NaijaRC, a new multi-choice Nigerian reading comprehension dataset based on high-school reading comprehension examinations for three Nigerian national languages: Hausa (hau), Igbo (ibo), and Yorùbá (yor). We provide baseline results by performing cross-lingual transfer with several pre-trained encoder-only models using the Belebele training data, which is largely drawn from the RACE dataset (RACE is based on English exams for middle and high school Chinese students, very similar to our dataset). Additionally, we provide results from prompting large language models (LLMs) such as GPT-4.
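The LLM-prompting baseline mentioned above can be illustrated with a short sketch: a multi-choice prompt sent to GPT-4 through the OpenAI chat API. The passage, question, and options are invented placeholders, not NaijaRC items, and the paper's actual prompt template may differ.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative multi-choice reading-comprehension prompt (placeholders throughout).
prompt = """Passage (Hausa): <passage text>

Question: <question text>
A) <option 1>
B) <option 2>
C) <option 3>
D) <option 4>

Answer with a single letter (A, B, C, or D)."""

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # e.g. "B"
```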
YAD: Leveraging T5 for improved automatic diacritization of Yorùbá text
Akindele Michael Olawole
Jesujoba Oluwadara Alabi
Aderonke Busayo Sakpere
In this work we present the Yorùbá automatic diacritization (YAD) benchmark dataset for evaluating Yorùbá diacritization systems. In addition, we pre-train a text-to-text transformer (T5) model for Yorùbá and show that it outperforms several multilingually trained T5 models. Lastly, we show that more data and bigger models are better at diacritization for Yorùbá.
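Framing diacritization as text-to-text, as the abstract describes, looks roughly like the sketch below: an undiacritized sentence in, a fully diacritized sentence out. The mT5 checkpoint and the example pair are stand-ins; the paper pre-trains its own Yorùbá T5.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "google/mt5-small"  # placeholder; not the paper's Yorùbá T5
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

src = "awon omo wa ni ile"   # undiacritized input (illustrative sentence)
tgt = "àwọn ọmọ wà ní ilé"   # fully diacritized target

batch = tok(src, return_tensors="pt")
labels = tok(text_target=tgt, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()                            # one fine-tuning step (optimizer omitted)
```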
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects
Hannah Liu
Xiaoyu Shen
Nikita Vassilyev
Jesujoba Oluwadara Alabi
Yanke Mao
Haonan Gao
Annie En-Shiun Lee
Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo
Tajuddeen Gwadabe
Clara E. Rivera
Jonathan H. Clark
Sebastian Ruder
Bonaventure F. P. Dossou
Abdou Aziz DIOP
Claytone Sikasote
Gilles Hacheme
Happy Buzaaba
Ignatius Ezeani
Rooweither Mabuya
Salomey Osei
Chris Emezue
Albert Kahira
Shamsuddeen Hassan Muhammad
Akintunde Oladipo
Abraham Toluwase Owodunni
Atnafu Lambebo Tonja
Iyanuoluwa Shode
Akari Asai
Aremu Anuoluwapo
Ayodele Awokoya
Bernard Opoku
Chiamaka Ijeoma Chukwuneke
Christine Mwase
Clemencia Siro
Stephen Arthur
Tunde Oluwaseyi Ajayi
Verrah Akinyi Otiende
Andre Niyongabo Rubungo
Boyd Sinkala
Daniel Ajisafe
Emeka Felix Onwuegbuzia
Falalu Lawan
Ibrahim Ahmad
Jesujoba Oluwadara Alabi
Chinedu Emmanuel Mbonu
Mofetoluwa Adeyemi
Mofya Phiri
Orevaoghene Ahia
Ruqayya Nasir Iro
Sonia Adhiambo
How good are Large Language Models on African Languages?
Jessica Ojo
Kelechi Ogueji
Pontus Stenetorp
Improving Language Plasticity via Pretraining with Active Forgetting
Yihong Chen
Kelly Marchisio
Roberta Raileanu
Pontus Stenetorp
Sebastian Riedel
Mikel Artetxe
Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown it is possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an active forgetting mechanism during pretraining as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every K updates during pretraining, we encourage the PLM to improve its ability to learn new embeddings within a limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation but also outperform standard ones in a low-data regime, particularly for languages that are distant from English. Code will be available at https://github.com/facebookresearch/language-model-plasticity.
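The core mechanism, resetting only the embedding layer every K updates, is easy to sketch. The toy model, task, and hyperparameters below are illustrative stand-ins, not the paper's RoBERTa setup, and optimizer-state handling on reset is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, K = 100, 16, 50  # K = reset period in updates (illustrative)

embed = nn.Embedding(vocab, dim)
body = nn.Linear(dim, vocab)  # stands in for the transformer body
opt = torch.optim.Adam(list(embed.parameters()) + list(body.parameters()), lr=1e-3)

for step in range(200):
    tokens = torch.randint(0, vocab, (8,))  # fake pretraining batch
    loss = nn.functional.cross_entropy(body(embed(tokens)), tokens)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (step + 1) % K == 0:
        # Active forgetting: re-initialize only the embeddings, forcing the
        # body to learn representations under which new embeddings are
        # quickly relearnable (the meta-learning-like effect described above).
        nn.init.normal_(embed.weight, std=0.02)
```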