Publications

List-GRAND: A Practical Way to Achieve Maximum Likelihood Decoding
Syed Mohsin Abbas
Marwan Jalaleddine
Warren J. Gross
Guessing random additive noise decoding (GRAND) is a recently proposed universal maximum likelihood (ML) decoder for short-length and high-rate linear block codes. Soft-GRAND (SGRAND) is a prominent soft-input GRAND variant, outperforming the other GRAND variants in decoding performance; nevertheless, SGRAND is not suitable for parallel hardware implementation. Ordered Reliability Bits-GRAND (ORBGRAND) is another soft-input GRAND variant that is suitable for parallel hardware implementation; however, it has lower decoding performance than SGRAND. In this article, we propose List-GRAND (LGRAND), a technique for enhancing the decoding performance of ORBGRAND to match the ML decoding performance of SGRAND. Numerical simulation results show that LGRAND enhances ORBGRAND’s decoding performance by 0.5–0.75 dB for channel codes of various classes at a target frame error rate (FER) of 10⁻⁷. For linear block codes of length 127/128 and different code rates, LGRAND’s VLSI implementation can achieve an average information throughput of 47.27–51.36 Gb/s. In comparison to ORBGRAND’s VLSI implementation, the proposed LGRAND hardware has a 4.84% area overhead.
A Literature Review on Detecting, Verifying, and Mitigating Online Misinformation
Arezo Bodaghi
Ketra A. Schmitt
Pierre Watine
Benjamin C. M. Fung
Social media use has transformed communication and made social interaction more accessible. Public microblogs allow people to share and access news through existing and social-media-created social connections and access to public news sources. These benefits also create opportunities for the spread of false information. False information online can mislead people, decrease the benefits derived from social media, and reduce trust in genuine news. We divide false information into two categories: unintentional false information, also known as misinformation; and intentionally false information, also known as disinformation and fake news. Given the increasing prevalence of misinformation, it is imperative to address its dissemination on social media platforms. This survey focuses on six key aspects related to misinformation: 1) clarify the definition of misinformation to differentiate it from intentional forms of false information; 2) categorize proposed approaches to manage misinformation into three types: detection, verification, and mitigation; 3) review the platforms and languages for which these techniques have been proposed and tested; 4) describe the specific features that are considered in each category; 5) compare public datasets created to address misinformation and categorize them into prelabeled content-only datasets and those including users and their connections; and 6) survey fact-checking websites that can be used to verify the accuracy of information. This survey offers a comprehensive and unprecedented review of misinformation, integrating various methodological approaches, datasets, and content-, user-, and network-based approaches, which will undoubtedly benefit future research in this field.
Lower Bounds for Active Automata Learning.
Loes Kruger
Bharat Garhewal
François Coste
Frits W. Vaandrager
Faissal Ouardi
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
Maintenance Cost of Software Ecosystem Updates
Solomon Berhe
M. Maynard
MARSY: a multitask deep-learning framework for prediction of drug combination synergy scores
Mohamed Reda El Khili
Combination therapies have emerged as a treatment strategy for cancers to reduce the probability of drug resistance and to improve outcomes. Large databases curating the results of many drug screening studies on preclinical cancer cell lines have been developed, capturing the synergistic and antagonistic effects of combinations of drugs in different cell lines. However, due to the high cost of drug screening experiments and the sheer size of possible drug combinations, these databases are quite sparse. This necessitates the development of transductive computational models to accurately impute these missing values. Here, we developed MARSY, a deep-learning multitask model that incorporates information on the gene expression profile of cancer cell lines, as well as the differential expression signature induced by each drug, to predict drug-pair synergy scores. By utilizing two encoders to capture the interplay between the drug pairs, as well as the drug pairs and cell lines, and by adding auxiliary tasks in the predictor, MARSY learns latent embeddings that improve the prediction performance compared to state-of-the-art and traditional machine-learning models. Using MARSY, we then predicted the synergy scores of 133 722 new drug-pair cell line combinations, which we have made available to the community as part of this study. Moreover, we validated various insights obtained from these novel predictions using independent studies, confirming the ability of MARSY to make accurate novel predictions. An implementation of the algorithms in Python and cleaned input datasets are provided at https://github.com/Emad-COMBINE-lab/MARSY.
MasakhaNEWS: News Topic Classification for African languages
Marek Masiak
Israel Abebe Azime
Jesujoba Alabi
Atnafu Lambebo Tonja
Christine Mwase
Odunayo Ogundepo
Bonaventure F. P. Dossou
Akintunde Oladipo
Doreen Nixdorf
Chris Chinenye Emezue
sana al-azzawi
Blessing Sibanda
Davis David
Lolwethu Ndolela
Jonathan Mukiibi
Tunde Ajayi
Tatiana Moteu
Brian Odhiambo
Abraham Owodunni
Nnaemeka Obiefuna
Shamsuddeen Hassan Muhammad
Saheed Abdullahi Salahudeen
Mesay Gemeda Yigezu
Tajuddeen Gwadabe
Idris Abdulmumin
Mahlet Taye
Oluwabusayo Awoyomi
Iyanuoluwa Shode
Tolulope Adelani
Habiba Abdulganiyu
Abdul-Hakeem Omotayo
Adetola Adeeko
Anuoluwapo Aremu
Olanrewaju Samuel
Clemencia Siro
Wangari Kimotho
Onyekachi Ogbu
Chinedu Mbonu
Chiamaka Chukwuneke
Samuel Fanijo
Oyinkansola Awosan
Tadesse Kebede
Toadoum Sari Sakayo
Pamela Nyatsine
Freedmore Sidume
Oreen Yousuf
Mardiyyah Oduwole
Ussen Kimanuka
Kanda Patrick Tshinu
Thina Diko
Siyanda Nxakama
Abdulmejid Johar
Sinodos Nigusse
Muhidin Mohamed
Shafie Mohamed
Fuad Mire Hassan
Moges Ahmed Mehamed
Evrard Ngabire
Pontus Stenetorp
African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g. named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as little as 10 examples per label, we achieved more than 90% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) leveraging the PET approach.
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages
Cheikh M. Bamba Dione
Peter Nabende
Jesujoba O. Alabi
Thapelo Sindane
Happy Buzaaba
Shamsuddeen Hassan Muhammad
Chris Chinenye Emezue
Perez Ogayo
Anuoluwapo Aremu
Catherine Gitau
Derguene Mbaye
Jonathan Mukiibi
Blessing Sibanda
Bonaventure F. P. Dossou
Andiswa Bukula
Rooweither Mabuya
Allahsera Auguste Tapo
Edwin Munkoh-Buabeng
Victoire Memdjokam Koagne
Fatoumata Ouoba Kabore
Amelia Taylor
Godson Kalipe
Tebogo Macucwa
Vukosi Marivate
Tajuddeen Gwadabe
Elvis Tchiaze Mboning
Ikechukwu Onyenwe
Gratien Atindogbe
Tolulope Anu Adelani
Idris Akinade
Olanrewaju Samuel
Marien Nahimana
Théogène Musabeyezu
Emile Niyomutabazi
Ester Chimhenga
Kudzai Gotosa
Patrick Mizha
Apelete Agbolo
Seydou Traore
Chinedu Uchechukwu
Aliyu Yusuf
Muhammad Abdullahi
Dietrich Klakow
In this paper, we present MasakhaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the UD (universal dependencies) guidelines. We conducted extensive POS baseline experiments using conditional random field and several multilingual pre-trained language models. We applied various cross-lingual transfer models trained with data available in UD. Evaluating on the MasakhaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with cross-lingual parameter-efficient fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties seems more effective for POS tagging in unseen languages.
Measuring Progress in Fine-grained Vision-and-Language Understanding
Emanuele Bugliarello
Laurent Sartran
Lisa Anne Hendricks
Aida Nematzadeh
While pretraining on large-scale image–text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack “fine-grained” understanding, such as the ability to recognise relationships, verbs, and numbers in images. This has resulted in an increased interest in the community to either develop new benchmarks or models for such capabilities. To better understand and quantify progress in this direction, we investigate four competitive V&L models on four fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al., 2022) consistently outperforms other baselines, and that modelling innovations can impact performance more than scaling Web data, which even degrades performance sometimes. Through a deeper investigation of X-VLM, we highlight the importance of both novel losses and rich data sources for learning fine-grained skills. Finally, we inspect training dynamics, and discover that for some tasks, performance peaks early in training or significantly fluctuates, never converging.
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
While significant research advances have been made in the field of deep reinforcement learning, there have been no concrete adversarial attack strategies in the literature tailored for studying the vulnerability of deep reinforcement learning algorithms to membership inference attacks. In such attacking systems, the adversary targets the set of collected input data on which the deep reinforcement learning algorithm has been trained. To address this gap, we propose an adversarial attack framework designed for testing the vulnerability of a state-of-the-art deep reinforcement learning algorithm to a membership inference attack. In particular, we design a series of experiments to investigate the impact of temporal correlation, which naturally exists in reinforcement learning training data, on the probability of information leakage. Moreover, we compare the performance of \emph{collective} and \emph{individual} membership attacks against the deep reinforcement learning algorithm. Experimental results show that the proposed adversarial attack framework is surprisingly effective at inferring data with an accuracy exceeding
MixupE: Understanding and improving Mixup from directional derivative perspective
Yingtian Zou
Wai Hoh Tang
Hieu Pham
Juho Kannala
Arno Solin
Motor cortex latent dynamics encode arm movement direction and urgency independently
Andrea Colins Rodriguez
Matthew G Perich
Lee Miller
Mark D. Humphries
The fluid movement of an arm is controlled by multiple parameters that can be set independently. Recent studies argue that arm movements are generated by the collective dynamics of neurons in motor cortex. But how these collective dynamics simultaneously encode and control multiple parameters of movement is an open question. Using a task where monkeys made sequential, varied arm movements, we show that the direction and urgency of arm movements are simultaneously encoded in the low-dimensional trajectories of population activity: each movement’s direction by a fixed, looped neural trajectory and its urgency by how quickly that trajectory was traversed. Network models showed this latent coding is potentially advantageous as it allows the direction and urgency of arm movement to be independently controlled. Our results suggest how low-dimensional neural dynamics can define multiple parameters of goal-directed movement simultaneously.