
David Ifeoluwa Adelani

Core Academic Member
Canada CIFAR AI Chair
McGill University

Biography

David Adelani is an Assistant Professor in computer science and fighting inequities at McGill University, and a Core Academic Member at Mila – Quebec Artificial Intelligence Institute. His research focuses on multilingual natural language processing, with a particular emphasis on under-resourced languages.

Publications

AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages
Jiayi Wang
Sweta Agrawal
Marek Masiak
Ricardo Rei
Eleftheria Briakou
Marine Carpuat
Xuanli He
Sofia Bourhim
Andiswa Bukula
Muhidin A. Mohamed
Temitayo Olatoye
Tosin Adewumi
Hamam Mokayed
Christine Mwase
Wangui Kimotho
Foutse Yuehgoh
Aremu Anuoluwapo
Jessica Ojo
Shamsuddeen Hassan Muhammad
Salomey Osei
Abdul-Hakeem Omotayo
Chiamaka Ijeoma Chukwuneke
Perez Ogayo
Oumaima Hourrane
Salma El Anigri
Lolwethu Ndolela
Thabiso Mangwana
Shafie Abdi Mohamed
Hassan Ayinde
Ayinde Hassan
Oluwabusayo Olufunke Awoyomi
Lama Alkhaled
Sana Sabah al-azzawi
Naome Etori
Millicent Ochieng
Clemencia Siro
Njoroge Kiragu
Samuel Njoroge
Eric Muchiri
Wangari Kimotho
Lyse Naomi Wamba
Daud Abolade
Simbiat Ajao
Iyanuoluwa Shode
Ricky Macharm
Ruqayya Nasir Iro
Saheed Salahudeen Abdullahi
Stephen Moore
Bernard Opoku
Zainab Akinjobi
Abeeb Afolabi
Nnaemeka Casmir Obiefuna
Onyekachi Ogbu
Sam Brian
Sam Ochieng’
Verrah Akinyi Otiende
Chinedu Emmanuel Mbonu
Toadoum Sari Sakayo
Yao Lu
Pontus Stenetorp
Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET: COMET evaluation metrics for African languages by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
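The meta-evaluation loop described above is straightforward to sketch: score MT hypotheses with a COMET-style learned metric, then compute the Spearman-rank correlation between the metric's scores and human DA ratings. The sketch below uses the `unbabel-comet` library's standard `download_model`/`load_from_checkpoint`/`predict` entry points; the AfriCOMET checkpoint name and the toy data are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch: correlate learned-metric scores with human DA judgments.
from comet import download_model, load_from_checkpoint
from scipy.stats import spearmanr

# Hypothetical (source, hypothesis, reference) triplets plus human DA ratings.
data = [
    {"src": "Mo fẹ́ jẹun.", "mt": "I want to eat.", "ref": "I would like to eat."},
    {"src": "Ó ti lọ sí ilé.", "mt": "He go home.", "ref": "He has gone home."},
]
human_da = [85.0, 42.0]  # direct-assessment ratings from annotators

# Checkpoint name is an assumption; substitute the released AfriCOMET model.
model = load_from_checkpoint(download_model("masakhane/africomet-stl"))
metric_scores = model.predict(data, batch_size=8, gpus=0).scores

rho, _ = spearmanr(metric_scores, human_da)
print(f"Spearman-rank correlation with human judgments: {rho:.3f}")
```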
Comparing LLM prompting with Cross-lingual transfer performance on Indigenous and Low-resource Brazilian Languages
A. Seza Doğruöz
André Coneglian
Atul Kr. Ojha
Large Language Models are transforming NLP for a variety of tasks. However, how LLMs perform NLP tasks for low-resource languages (LRLs) is less explored. In line with the goals of the AmericasNLP workshop, we focus on 12 LRLs from Brazil, 2 LRLs from Africa, and 2 high-resource languages (HRLs), English and Brazilian Portuguese. Our results indicate that the LLMs perform worse for part-of-speech (POS) labeling of LRLs in comparison to HRLs. We explain the reasons behind this failure and provide an error analysis through examples observed in our data set.
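As a rough illustration of the prompting setup such a study requires (the paper's actual prompts are not reproduced here), the snippet below builds a zero-shot POS-labeling prompt over the Universal POS tagset; the wording and answer format are assumptions.

```python
# Sketch of a zero-shot POS-labeling prompt for a tokenized sentence.
UPOS_TAGS = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN",
             "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X"]

def build_pos_prompt(tokens: list[str], language: str) -> str:
    """Build an illustrative zero-shot POS-tagging prompt for an LLM."""
    tagset = ", ".join(UPOS_TAGS)
    return (
        f"Label each token of the following {language} sentence with one "
        f"Universal POS tag from: {tagset}.\n"
        "Answer as 'token -> TAG', one pair per line.\n\n"
        "Tokens: " + " ".join(tokens)
    )

print(build_pos_prompt(["Eu", "gosto", "de", "música", "."], "Brazilian Portuguese"))
```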
EkoHate: Abusive Language and Hate Speech Detection for Code-switched Political Discussions on Nigerian Twitter
Comfort Eseohen Ilevbare
Jesujoba Oluwadara Alabi
Firdous Damilola Bakare
Oluwatoyin Bunmi Abiola
Oluwaseyi A. Adeyemo
Nigerians have a notable online presence and actively discuss political and topical matters. This was particularly evident throughout the 2023 general election, where Twitter was used for campaigning, fact-checking and verification, and even positive and negative discourse. However, little work has been done on the detection of abusive language and hate speech in Nigeria. In this paper, we curated code-switched Twitter data directed at the three musketeers of the governorship election in Lagos, the most populous and economically vibrant state in Nigeria, with the view to detecting offensive speech in political discussions. We developed EkoHate, an abusive language and hate speech dataset for political discussions between the three candidates and their followers, using a binary (normal vs. offensive) and a fine-grained four-label annotation scheme. We analysed our dataset and provided an empirical evaluation of state-of-the-art methods across both supervised and cross-lingual transfer learning settings. In the supervised setting, our evaluation results for the binary and four-label annotation schemes show that we can achieve 95.1 and 70.3 F1 points, respectively. Furthermore, we show that our dataset transfers very well to three publicly available offensive datasets (OLID, HateUS2020, and FountaHate), generalizing to political discussions in other regions like the US.
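For the supervised setting, a minimal fine-tuning sketch with Hugging Face Transformers might look like the following; the dataset hub identifier, column names, and hyperparameters are assumptions, and AfroXLM-R stands in for whichever multilingual encoder is used.

```python
# Sketch: fine-tuning a multilingual encoder for binary (normal vs. offensive)
# classification. Dataset id and columns ("tweet", "label") are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Davlan/afro-xlmr-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "Davlan/afro-xlmr-base", num_labels=2)  # normal vs. offensive

ds = load_dataset("HausaNLP/EkoHate")  # hypothetical hub identifier
ds = ds.map(lambda b: tokenizer(b["tweet"], truncation=True, max_length=128),
            batched=True)

args = TrainingArguments(output_dir="ekohate-binary", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["validation"],
        data_collator=DataCollatorWithPadding(tokenizer)).train()
```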
5th Workshop on African Natural Language Processing (AfricaNLP 2024)
Happy Buzaaba
Bonaventure F. P. Dossou
Hady Elsahar
Constantine Lignos
Atnafu Lambebo Tonja
Salomey Osei
Aremu Anuoluwapo
Clemencia Siro
Shamsuddeen Hassan Muhammad
Tajuddeen Gwadabe
Perez Ogayo
Israel Abebe Azime
Kayode Olaleye
Over 1 billion people live in Africa, and its residents speak more than 2,000 languages. But those languages are among the least represented in NLP research, and work on African languages is often sidelined at major venues. Over the past few years, a vibrant, collaborative community of researchers has formed around a sustained focus on NLP for the benefit of the African continent: national, regional, continental, and even global collaborative efforts focused on African languages, African corpora, and tasks with importance in the African context. The AfricaNLP workshops have been a central venue in organizing, sustaining, and growing this focus, and we propose to continue this tradition with an AfricaNLP 2024 workshop in Vienna. Starting in 2020, the AfricaNLP workshop has become a core event for the African NLP community and has drawn global attendance and interest. Many of the participants are active in the Masakhane grassroots NLP community, allowing the community to convene, showcase, and share experiences with each other. Large-scale collaborative works have been enabled by participants who joined through the AfricaNLP workshop, such as MasakhaNER (61 authors), Quality Assessment of Multilingual Datasets (51 authors), Corpora Building for Twi (25 authors), and NLP for Ghanaian Languages (25 authors). Many first-time authors, through the mentorship program, found collaborators and published their first paper. Those mentorship relationships built trust and coherence within the community that continues to this day. We aim to continue this tradition. In the contemporary AI landscape, generative AI has rapidly expanded with significant input and innovation from the global research community. This technology enables machines to generate novel content and shows potential across a multitude of sectors. However, the underrepresentation of African languages persists within this growth. Recognizing the urgency to address this gap has inspired the theme for the 2024 workshop, Adaptation of Generative AI for African Languages, which aspires to bring together experts, linguists, and AI enthusiasts to explore solutions, collaborations, and strategies to amplify the presence of African languages in generative AI models.
AfriHG: News Headline Generation for African Languages
Toyib Ogunremi
Serah Sessi Akojenu
Anthony Soronnadi
Olubayo Adekanmbi
ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
Osvaldo Luamba Quinjica
In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages. However, this progress has predominantly bypassed the inclusion of very-low-resource languages, creating a notable void in the multilingual landscape. This paper addresses this gap by introducing four tailored PLMs specifically finetuned for Angolan languages, employing a Multilingual Adaptive Fine-tuning (MAFT) approach. We survey the role of informed embedding initialization and synthetic data in enhancing the performance of MAFT models on downstream tasks. We improve over the SOTA baselines AfroXLMR-base (developed through MAFT) and OFA (an effective embedding initialization method) by 12.3 and 3.8 points, respectively.
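MAFT itself amounts to continued masked-language-model training of an existing multilingual encoder on monolingual text in the new languages. A minimal sketch under that reading, assuming a local plain-text corpus (the file name, base checkpoint, and hyperparameters are placeholders):

```python
# Sketch of Multilingual Adaptive Fine-tuning (MAFT): continued MLM training
# of a multilingual encoder on monolingual target-language text.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Davlan/afro-xlmr-base")
model = AutoModelForMaskedLM.from_pretrained("Davlan/afro-xlmr-base")

corpus = load_dataset("text", data_files={"train": "angolan_corpus.txt"})
corpus = corpus.map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="angofa-maft", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=5e-5)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=corpus["train"]).train()
```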
EkoHate: Offensive and Hate Speech Detection for Code-switched Political discussions on Nigerian Twitter
Comfort Eseohen Ilevbare
Jesujoba Oluwadara Alabi
Bakare Firdous Damilola
Abiola Oluwatoyin Bunmi
Adeyemo Oluwaseyi Adesina
Nigerians have a notable online presence and actively discuss political and topical matters. This was particularly evident throughout the 2023 general election, where Twitter was utilized for campaigning, fact-checking and verification, and even positive and negative discourse. However, little work has been done on the detection of abusive language and hate speech in Nigeria. In this paper, we curate code-switched Twitter data directed at the three musketeers of the governorship election in Lagos, the most populous and economically vibrant state in Nigeria, with the view to detecting offensive and hate speech in political discussions. We develop EkoHate, an abusive language and hate speech dataset for political discussions between the three candidates and their followers, using a binary (normal vs. offensive) and a fine-grained four-label annotation scheme. We analyse our dataset and provide an empirical evaluation of state-of-the-art methods across both supervised and cross-lingual transfer learning settings. In the supervised setting, our evaluation results for the binary and four-label annotation schemes show that we can achieve 95.1 and 70.3 F1 points, respectively. Furthermore, we show that our dataset transfers very well to two publicly available offensive datasets (OLID and HateUS2020), with at least 62.7 F1 points.
Enhancing Transformer Models for Igbo Language Processing: A Critical Comparative Study
Anthony Soronnadi
Olubayo Adekanmbi
Chinazo Anebelundu
NaijaRC: A Multi-choice Reading Comprehension Dataset for Nigerian Languages
Aremu Anuoluwapo
Jesujoba Oluwadara Alabi
Daud Abolade
Nkechinyere Faith Aguobi
Shamsuddeen Hassan Muhammad
In this paper, we create NaijaRC, a new multi-choice Nigerian Reading Comprehension dataset based on high-school RC examinations for three Nigerian national languages: Hausa (hau), Igbo (ibo), and Yorùbá (yor). We provide baseline results by performing cross-lingual transfer using the Belebele training data, which is largely drawn from the RACE dataset (RACE is based on English exams for middle and high school Chinese students, and is very similar to our dataset), based on several pre-trained encoder-only models. Additionally, we provide results by prompting large language models (LLMs) like GPT-4.
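Cross-lingual transfer evaluation on one item of such a dataset can be sketched as follows: an encoder fine-tuned for multiple-choice RC on the (English) Belebele/RACE-style training data scores each answer option against the passage and question, and the argmax is the prediction. The checkpoint name and field contents below are placeholder assumptions.

```python
# Sketch of zero-shot cross-lingual evaluation on a multi-choice RC item.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

ckpt = "my-org/afro-xlmr-base-belebele"  # hypothetical RC-finetuned model
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForMultipleChoice.from_pretrained(ckpt)

passage = "..."   # Hausa/Igbo/Yorùbá passage from the exam
question = "..."  # question in the same language
options = ["option A", "option B", "option C", "option D"]

enc = tokenizer([passage] * len(options),
                [f"{question} {opt}" for opt in options],
                truncation=True, padding=True, return_tensors="pt")
# AutoModelForMultipleChoice expects shape (batch, num_choices, seq_len).
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_choices)
print("Predicted option:", options[logits.argmax(-1).item()])
```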
YAD: Leveraging T5 for improved automatic diacritization of Yorùbá text
Akindele Michael Olawole
Jesujoba Oluwadara Alabi
Aderonke Busayo Sakpere
In this work we present the Yorùbá automatic diacritization (YAD) benchmark dataset for evaluating Yorùbá diacritization systems. In addition, we pre-train a text-to-text transformer (T5) model for Yorùbá and show that it outperforms several multilingually trained T5 models. Lastly, we show that more data and bigger models are better at diacritization for Yorùbá.
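Diacritization in the text-to-text framing is a plain sequence-to-sequence call: the model reads undiacritized Yorùbá and emits the fully diacritized string. A minimal sketch with a hypothetical fine-tuned T5 checkpoint (the name and example are assumptions):

```python
# Sketch: restoring Yorùbá diacritics with a T5-style text-to-text model,
# assumed to be fine-tuned on (undiacritized, diacritized) sentence pairs.
from transformers import T5ForConditionalGeneration, T5Tokenizer

ckpt = "my-org/yoruba-t5-diacritizer"  # hypothetical fine-tuned checkpoint
tokenizer = T5Tokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt)

undiacritized = "awon omo ile iwe lo si oja"
inputs = tokenizer(undiacritized, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected style of output: "àwọn ọmọ ilé ìwé lọ sí ọjà" (tones/dots restored)
```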
ÌròyìnSpeech: A multi-purpose Yorùbá Speech Corpus
Tolúlọpẹ́ Ògúnremí
Kọ́lá Túbọ̀sún
Aremu Anuoluwapo
Iroro Orife
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects
Hannah Liu
Xiaoyu Shen
Nikita Vassilyev
Jesujoba Oluwadara Alabi
Yanke Mao
Haonan Gao
Annie En-Shiun Lee