
David Ifeoluwa Adelani

Core Academic Member
Canada CIFAR AI Chair
McGill University
Research Topics
Representation Learning
Deep Learning
Speech Processing
Natural Language Processing

Biography

David Adelani is an assistant professor in computer science and fighting inequities at McGill University, and a Core Academic Member at Mila – Quebec Artificial Intelligence Institute. His research focuses on multilingual natural language processing, with a particular emphasis on under-resourced languages.

Current Students

Master's Research - McGill
Research Intern - McGill
Research Intern - McGill
Research Intern - McGill
Research Intern - McGill
PhD - McGill
Research Intern - McGill
PhD - McGill
PhD - McGill
Master's Research - McGill
Collaborating Alumni - McGill
Research Intern - McGill
Professional Master's - UdeM
Research Intern - McGill
Master's Research - McGill

Publications

AfriHG: News Headline Generation for African Languages
Toyib Ogunremi
Serah sessi Akojenu
Anthony Soronnadi
Olubayo Adekanmbi
ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
Osvaldo Luamba Quinjica
In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages. However, this progress has largely bypassed very-low-resource languages, creating a notable void in the multilingual landscape. This paper addresses that gap by introducing four PLMs specifically fine-tuned for Angolan languages, employing a Multilingual Adaptive Fine-tuning (MAFT) approach. We examine the role of informed embedding initialization and synthetic data in enhancing the performance of MAFT models on downstream tasks, improving over the SOTA baselines AfroXLMR-base (developed through MAFT) and OFA (an effective embedding initialization) by 12.3 and 3.8 points, respectively.
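As a rough illustration of the MAFT recipe described above, the sketch below continues masked-language-model pretraining of an existing multilingual encoder on monolingual target-language text with Hugging Face Transformers. The corpus file, hyperparameters, and training setup are illustrative assumptions, not the paper's actual configuration.

from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Start from the AfroXLMR-base baseline mentioned in the abstract.
base = "Davlan/afro-xlmr-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical plain-text corpus of Angolan-language sentences, one per line.
corpus = load_dataset("text", data_files={"train": "angolan_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Standard 15% token masking, as in typical MLM adaptation.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="angofa-maft",
                           per_device_train_batch_size=8,
                           num_train_epochs=3, learning_rate=5e-5),
    train_dataset=tokenized["train"], data_collator=collator)
trainer.train()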
EkoHate: Offensive and Hate Speech Detection for Code-switched Political Discussions on Nigerian Twitter
Comfort Eseohen Ilevbare
Jesujoba Oluwadara Alabi
Bakare Firdous Damilola
Abiola Oluwatoyin Bunmi
ADEYEMO Oluwaseyi Adesina
Nigerians have a notable online presence and actively discuss political and topical matters. This was particularly evident throughout the 2023 general election, where Twitter was used for campaigning, fact-checking and verification, and both positive and negative discourse. However, little or no work has been done on detecting abusive language and hate speech in Nigeria. In this paper, we curate code-switched Twitter data directed at the three leading candidates in the governorship election in Lagos state, the most populous and economically vibrant state in Nigeria, with the aim of detecting offensive and hate speech in political discussions. We develop EkoHate, an abusive language and hate speech dataset for political discussions between the three candidates and their followers, using both a binary (normal vs. offensive) and a fine-grained four-label annotation scheme. We analyse our dataset and provide an empirical evaluation of state-of-the-art methods in both supervised and cross-lingual transfer learning settings. In the supervised setting, our evaluation shows that we can achieve 95.1 and 70.3 F1 points on the binary and four-label annotation schemes, respectively. Furthermore, we show that our dataset transfers well to two publicly available offensive datasets (OLID and HateUS2020), with at least 62.7 F1 points.
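A minimal sketch of the supervised binary setting (normal vs. offensive) reported above: fine-tune a pretrained encoder as a sequence classifier and score it with F1. The CSV schema, base checkpoint, and hyperparameters are assumptions for illustration, not the paper's exact setup.

import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "Davlan/afro-xlmr-base"  # a plausible encoder for code-switched Nigerian text
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical CSV files with "text" and "label" (0 = normal, 1 = offensive) columns.
data = load_dataset("csv", data_files={"train": "ekohate_train.csv",
                                       "test": "ekohate_test.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"f1": f1_score(labels, np.argmax(logits, axis=-1))}

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="ekohate-clf",
                                         num_train_epochs=3),
                  train_dataset=data["train"], eval_dataset=data["test"],
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())  # reports eval_f1 on the held-out split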
Enhancing Transformer Models for Igbo Language Processing: A Critical Comparative Study
Anthony Soronnadi
Olubayo Adekanmbi
Chinazo Anebelundu
NaijaRC: A Multi-choice Reading Comprehension Dataset for Nigerian Languages
Aremu Anuoluwapo
Jesujoba Oluwadara Alabi
Daud Abolade
Nkechinyere Faith Aguobi
Shamsuddeen Hassan Muhammad
In this paper, we create NaijaRC, a new multi-choice Nigerian Reading Comprehension dataset based on high-school RC examinations for three Nigerian national languages: Hausa (hau), Igbo (ibo), and Yorùbá (yor). We provide baseline results by performing cross-lingual transfer from the Belebele training data (largely drawn from the RACE dataset, which is based on English exams for middle- and high-school Chinese students and is very similar to our dataset) using several pre-trained encoder-only models. Additionally, we provide results obtained by prompting large language models (LLMs) such as GPT-4.
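The GPT-4 prompting baseline mentioned above can be sketched as follows: each NaijaRC-style item (passage, question, answer options) is formatted as a prompt, and the model is asked to reply with a single option letter. The field layout and prompt wording are assumptions for illustration, not the paper's exact prompts.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_mcq(passage: str, question: str, options: list[str]) -> str:
    letters = "ABCD"
    choices = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    prompt = (f"Passage:\n{passage}\n\nQuestion: {question}\n"
              f"Options:\n{choices}\n\n"
              "Answer with only the letter of the correct option.")
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}])
    return reply.choices[0].message.content.strip()[0]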
YAD: Leveraging T5 for improved automatic diacritization of Yorùbá text
Akindele Michael Olawole
Jesujoba Oluwadara Alabi
Aderonke Busayo Sakpere
In this work, we present the Yorùbá automatic diacritization (YAD) benchmark dataset for evaluating Yorùbá diacritization systems. In addition, we pre-train a text-to-text transformer (T5) model for Yorùbá and show that it outperforms several multilingually trained T5 models. Lastly, we show that more data and bigger models yield better diacritization for Yorùbá.
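Diacritization as a text-to-text task is straightforward to run at inference time: the model reads undiacritized Yorùbá and emits the diacritized form. The checkpoint name below is a hypothetical placeholder, not the released YAD model.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "your-org/yoruba-t5-diacritizer"  # hypothetical fine-tuned T5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Undiacritized input in, diacritized text out.
inputs = tokenizer("bawo ni o se wa", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))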
Are LLMs Breaking MT Metrics? Results of the WMT24 Metrics Shared Task
Markus Freitag
Nitika Mathur
Daniel Deutsch
Chi-kiu Lo
Eleftherios Avramidis
Ricardo Rei
Brian Thompson
Frédéric Blain
Tom Kocmi
Jiayi Wang
Marianna Buchicchio
Chrysoula Zerva
Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)
Jiayi Wang
Pontus Stenetorp
Findings of the 2nd Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2024
Francesco Tinner
Raghav Mantri
Mammad Hajili
Chiamaka Ijeoma Chukwuneke
Dylan Massey
Benjamin A. Ajibade
Bilge Kocak
Abolade Dawud
Jonathan Atala
Hale Sirin
Kayode Olaleye
Anar Rzayev
Duygu Ataman
Large language models (LLMs) demonstrate exceptional proficiency in both the comprehension and generation of textual data, particularly in English, a language for which extensive public benchmarks have been established across a wide range of natural language processing (NLP) tasks. Nonetheless, their performance in multilingual contexts and specialized domains remains less rigorously validated, raising questions about their reliability and generalizability across linguistically diverse and domain-specific settings. The second edition of the Shared Task on Multilingual Multitask Information Retrieval aims to provide a comprehensive and inclusive multilingual evaluation benchmark that helps assess the ability of multilingual LLMs to capture logical, factual, or causal relationships within lengthy text contexts and to generate language in sparse settings, particularly for under-resourced languages. The shared task consists of two subtasks crucial to information retrieval, named entity recognition (NER) and reading comprehension (RC), in 7 data-scarce languages, including Azerbaijani, Swiss German, and Turkish, which previously lacked annotated resources for information retrieval tasks. This year specifically focuses on the multiple-choice question answering evaluation setting, which provides a more objective basis for comparing different methods across languages.
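For the multiple-choice evaluation setting described above, scoring reduces to exact-match accuracy between predicted and gold option letters. A minimal sketch, assuming JSONL files with "id" and "answer" fields (an illustrative schema, not the shared task's official format):

import json

def read_answers(path: str) -> dict:
    # One JSON object per line, e.g. {"id": "az-0001", "answer": "B"}
    with open(path, encoding="utf-8") as f:
        return {row["id"]: row["answer"] for row in map(json.loads, f)}

def mcq_accuracy(pred_path: str, gold_path: str) -> float:
    preds, golds = read_answers(pred_path), read_answers(gold_path)
    correct = sum(preds.get(qid) == ans for qid, ans in golds.items())
    return correct / len(golds)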