Publications

Learning Syntactic Monoids from Samples by extending known Algorithms for learning State Machines
Simon Dieck
Sicco Verwer
François Coste
Faissal Ouardi
For the inference of regular languages, most current methods learn a version of deterministic finite automata. Syntactic monoids are an alternative representation of regular languages, with some advantages over automata: for example, traces can be parsed starting from any index, and the star-freeness of the represented language can be checked in polynomial time. But, to date, there existed no passive learning algorithm for syntactic monoids. In this paper, we prove that known state-merging algorithms for learning deterministic finite automata can be instrumented to learn syntactic monoids instead, by using as input a special structure proposed in this paper: the interfix-graph. Further, we introduce a method to encode frequencies on the interfix-graph, so that models can also be learned from only positive traces. We implemented this structure and performed experiments with both traditional data and data containing only positive traces. As such, this work answers basic theoretical and experimental questions regarding a novel passive learning algorithm for syntactic monoids.
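The state-merging idea the paper builds on can be illustrated with a minimal RPNI-style sketch over plain labelled strings. This is only a generic DFA learner for intuition, not the interfix-graph construction proposed in the paper; all function and variable names are illustrative:

    # Generic RPNI-style state-merging sketch: build a prefix tree acceptor
    # from labelled strings, then greedily merge states while staying consistent.
    import copy

    def build_pta(positive, negative):
        trans, label = {(): {}}, {(): None}
        for word, lab in [(w, True) for w in positive] + [(w, False) for w in negative]:
            state = ()
            for ch in word:
                nxt = state + (ch,)
                trans[state][ch] = nxt
                trans.setdefault(nxt, {})
                label.setdefault(nxt, None)
                state = nxt
            label[state] = lab
        return trans, label

    def fold(trans, label, q1, q2):
        # Absorb the subtree rooted at q2 into q1; report label conflicts.
        if label[q2] is not None:
            if label[q1] is not None and label[q1] != label[q2]:
                return False
            label[q1] = label[q2]
        for ch, t2 in trans[q2].items():
            if ch in trans[q1]:
                if not fold(trans, label, trans[q1][ch], t2):
                    return False
            else:
                trans[q1][ch] = t2
        return True

    def try_merge(trans, label, red_state, blue_state):
        trans, label = copy.deepcopy(trans), copy.deepcopy(label)
        for s in trans:                       # redirect edges into the blue state
            for ch, t in trans[s].items():
                if t == blue_state:
                    trans[s][ch] = red_state
        return (trans, label) if fold(trans, label, red_state, blue_state) else None

    def learn_dfa(positive, negative):
        trans, label = build_pta(positive, negative)
        red = [()]
        while True:
            blue = sorted({t for r in red for t in trans[r].values() if t not in red})
            if not blue:
                return trans, label, red
            q = blue[0]
            for r in red:
                merged = try_merge(trans, label, r, q)
                if merged:
                    trans, label = merged
                    break
            else:
                red.append(q)                 # no consistent merge: promote to red

    # Tiny usage example on a handful of labelled strings over {a, b}.
    trans, label, red = learn_dfa(positive=["", "b", "aa", "bab"],
                                  negative=["a", "ab", "ba", "aaa"])
    print(len(red), "states in the merged automaton")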
List-GRAND: A Practical Way to Achieve Maximum Likelihood Decoding
Syed Mohsin Abbas
Marwan Jalaleddine
Guessing random additive noise decoding (GRAND) is a recently proposed universal maximum likelihood (ML) decoder for short-length and high-rate linear block codes. Soft-GRAND (SGRAND) is a prominent soft-input GRAND variant, outperforming the other GRAND variants in decoding performance; nevertheless, SGRAND is not suitable for parallel hardware implementation. Ordered Reliability Bits-GRAND (ORBGRAND) is another soft-input GRAND variant that is suitable for parallel hardware implementation; however, it has lower decoding performance than SGRAND. In this article, we propose List-GRAND (LGRAND), a technique for enhancing the decoding performance of ORBGRAND to match the ML decoding performance of SGRAND. Numerical simulation results show that LGRAND enhances ORBGRAND's decoding performance by 0.5–0.75 dB for channel codes of various classes at a target frame error rate (FER) of 10⁻⁷. For linear block codes of length 127/128 and different code rates, LGRAND's VLSI implementation can achieve an average information throughput of 47.27–51.36 Gb/s. In comparison to ORBGRAND's VLSI implementation, the proposed LGRAND hardware has a 4.84% area overhead.
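The core GRAND principle can be sketched in a few lines: guess putative noise patterns in decreasing likelihood order (here simply by increasing Hamming weight, as for a binary symmetric channel, rather than ORBGRAND's logistic-weight schedule or LGRAND's list extension), strip each guess from the received word, and stop at the first result that satisfies the code's parity checks. A toy hard-decision illustration, assuming a parity-check matrix H over GF(2):

    import itertools
    import numpy as np

    def grand_decode(y, H, max_weight=3):
        # y: hard-decision received vector (0/1 numpy array of length n)
        # H: (n - k) x n parity-check matrix over GF(2)
        n = len(y)
        for w in range(max_weight + 1):                    # lightest noise first
            for flips in itertools.combinations(range(n), w):
                c = y.copy()
                c[list(flips)] ^= 1                        # remove the guessed noise
                if not (H @ c % 2).any():                  # all parity checks satisfied?
                    return c, flips
        return None, None                                  # abandon decoding

    # Usage with the (7,4) Hamming code: a single flipped bit is corrected.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    codeword = np.array([1, 0, 1, 1, 0, 1, 0])
    received = codeword.copy()
    received[2] ^= 1                                       # single bit error
    decoded, noise = grand_decode(received, H)
    print(decoded, noise)                                  # recovers the codeword; noise guess is (2,)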
A Literature Review on Detecting, Verifying, and Mitigating Online Misinformation
Arezo Bodaghi
Ketra A. Schmitt
Pierre Watine
Social media use has transformed communication and made social interaction more accessible. Public microblogs allow people to share and access news through existing social connections, connections created on social media, and public news sources. These benefits also create opportunities for the spread of false information. False information online can mislead people, decrease the benefits derived from social media, and reduce trust in genuine news. We divide false information into two categories: unintentional false information, also known as misinformation; and intentionally false information, also known as disinformation and fake news. Given the increasing prevalence of misinformation, it is imperative to address its dissemination on social media platforms. This survey focuses on six key aspects related to misinformation: 1) clarifying the definition of misinformation to differentiate it from intentional forms of false information; 2) categorizing proposed approaches to managing misinformation into three types: detection, verification, and mitigation; 3) reviewing the platforms and languages for which these techniques have been proposed and tested; 4) describing the specific features that are considered in each category; 5) comparing public datasets created to address misinformation and categorizing them into prelabeled content-only datasets and those that also include users and their connections; and 6) surveying fact-checking websites that can be used to verify the accuracy of information. This survey offers a comprehensive review of misinformation, integrating methodological approaches, datasets, and content-, user-, and network-based techniques, which should benefit future research in this field.
Lower Bounds for Active Automata Learning
Loes Kruger
Bharat Garhewal
François Coste
Frits W. Vaandrager
Faissal Ouardi
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
Arkil Patel
Satwik Bhattamishra
Maintenance Cost of Software Ecosystem Updates
Solomon Berhe
M. Maynard
MasakhaNEWS: News Topic Classification for African languages
Marek Masiak
Israel Abebe Azime
Jesujoba Oluwadara Alabi
Atnafu Lambebo Tonja
Christine Mwase
Odunayo Ogundepo
Bonaventure F. P. Dossou
Akintunde Oladipo
Doreen Nixdorf
Chris Emezue
Sana Sabah al-azzawi
Blessing Kudzaishe Sibanda
Davis David
Lolwethu Ndolela
Jonathan Mukiibi
Tunde Oluwaseyi Ajayi
Tatiana Moteu Ngoli
Brian Odhiambo
Abraham Toluwase Owodunni
Nnaemeka Casmir Obiefuna
Shamsuddeen Hassan Muhammad
Saheed Salahudeen Abdullahi
Mesay Gemeda Yigezu
Tajuddeen Gwadabe
Idris Abdulmumin
Mahlet Taye Bame
Oluwabusayo Olufunke Awoyomi
Iyanuoluwa Shode
Tolulope Anu Adelani
Habiba Abdulganiy Kailani
Abdul-Hakeem Omotayo
Adetola Adeeko
Afolabi Abeeb
Aremu Anuoluwapo
Olanrewaju Samuel
Clemencia Siro
Wangari Kimotho
Onyekachi Ogbu
Chinedu Emmanuel Mbonu
Chiamaka Ijeoma Chukwuneke
Samuel Fanijo
Jessica Ojo
Oyinkansola Fiyinfoluwa Awosan
Tadesse Kebede Guge
Toadoum Sari Sakayo
Pamela Nyatsine
Freedmore Sidume
Oreen Yousuf
Mardiyyah Oduwole
Ussen Abre Kimanuka
Kanda Patrick Tshinu
Thina Diko
Siyanda Nxakama
Abdulmejid Tuni Johar
Sinodos Gebre
Muhidin A. Mohamed
Shafie Abdi Mohamed
Fuad Mire Hassan
Moges Ahmed Mehamed
Evrard Ngabire
Pontus Stenetorp
African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g. named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS, a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as little as 10 examples per label, we achieve more than 90% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) leveraging the PET approach.
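As a rough illustration of the "classical machine learning" baselines mentioned in the abstract (not the paper's actual pipeline or data loading), the following is a minimal TF-IDF plus logistic-regression topic classifier with placeholder examples; character n-grams are a common choice for morphologically rich, lower-resourced languages:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder data; the real benchmark provides labelled news text
    # in 16 African languages.
    train_texts = ["the home team lifted the trophy after extra time",
                   "parliament approved the new national budget"]
    train_labels = ["sports", "politics"]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),  # character n-grams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(train_texts, train_labels)
    print(clf.predict(["the striker scored twice in the final"]))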
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African languages
Cheikh M. Bamba Dione
Peter Nabende
Jesujoba Oluwadara Alabi
Thapelo Sindane
Happy Buzaaba
Shamsuddeen Hassan Muhammad
Chris Emezue
Perez Ogayo
Aremu Anuoluwapo
Catherine Gitau
Derguene Mbaye
Jonathan Mukiibi
Blessing Kudzaishe Sibanda
Bonaventure F. P. Dossou
Andiswa Bukula
Rooweither Mabuya
Allahsera Auguste Tapo
Edwin Munkoh-Buabeng
Victoire Memdjokam Koagne
Fatoumata Ouoba Kabore
Amelia Taylor
Godson Kalipe
Tebogo Macucwa
Vukosi Marivate
Tajuddeen Gwadabe
Mboning Tchiaze Elvis
Ikechukwu Onyenwe
Gratien Atindogbe
Tolulope Anu Adelani
Idris Akinade
Olanrewaju Samuel
Marien Nahimana
Théogène Musabeyezu
Emile Niyomutabazi
Ester Chimhenga
Kudzai Gotosa
Patrick Mizha
Apelete Agbolo
Seydou Traore
Chinedu Uchechukwu
Aliyu Yusuf
Muhammad Abdullahi
Dietrich Klakow
In this paper, we present AfricaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the universal dependencies (UD) guidelines. We conducted extensive POS baseline experiments using both conditional random fields and several multilingual pre-trained language models. We applied various cross-lingual transfer models trained with data available in the UD. Evaluating on the AfricaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with parameter-fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties seems to be more effective for POS tagging in unseen languages.
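The conditional random field baseline mentioned in the abstract can be sketched with the sklearn-crfsuite package; the data below is a hypothetical placeholder, and the feature set is deliberately minimal compared with what a real POS baseline would use:

    import sklearn_crfsuite

    def token_features(sent, i):
        # Simple word-shape and context features for one token.
        w = sent[i]
        return {
            "lower": w.lower(),
            "prefix3": w[:3],
            "suffix3": w[-3:],
            "is_title": w.istitle(),
            "is_digit": w.isdigit(),
            "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
            "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
        }

    # Placeholder sentences with UD-style tags; the real dataset covers
    # 20 African languages annotated under the UD guidelines.
    train_sents = [["The", "cat", "sleeps"], ["Dogs", "bark", "loudly"]]
    train_tags = [["DET", "NOUN", "VERB"], ["NOUN", "VERB", "ADV"]]

    X_train = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X_train, train_tags)
    print(crf.predict([[token_features(["Birds", "sing"], i) for i in range(2)]]))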
Measuring Progress in Fine-grained Vision-and-Language Understanding
Emanuele Bugliarello
Laurent Sartran
Lisa Anne Hendricks
Aida Nematzadeh
While pretraining on large-scale image–text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack “fine-grained” understanding, such as the ability to recognise relationships, verbs, and numbers in images. This has resulted in an increased interest in the community to either develop new benchmarks or models for such capabilities. To better understand and quantify progress in this direction, we investigate four competitive V&L models on four fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al., 2022) consistently outperforms other baselines, and that modelling innovations can impact performance more than scaling Web data, which even degrades performance sometimes. Through a deeper investigation of X-VLM, we highlight the importance of both novel losses and rich data sources for learning fine-grained skills. Finally, we inspect training dynamics, and discover that for some tasks, performance peaks early in training or significantly fluctuates, never converging.
Mechanistic Mode Connectivity
Ekdeep Singh Lubana
Eric J Bigelow
Robert P. Dick
Hidenori Tanaka
We study neural network loss landscapes through the lens of mode connectivity, the observation that minimizers of neural networks retrieved via training on a dataset are connected via simple paths of low loss. Specifically, we ask the following question: are minimizers that rely on different mechanisms for making their predictions connected via simple paths of low loss? We provide a definition of mechanistic similarity as shared invariances to input transformations and demonstrate that lack of linear connectivity between two models implies they use dissimilar mechanisms for making their predictions. Relevant to practice, this result helps us demonstrate that naive fine-tuning on a downstream dataset can fail to alter a model's mechanisms, e.g., fine-tuning can fail to eliminate a model's reliance on spurious attributes. Our analysis also motivates a method for targeted alteration of a model's mechanisms, named connectivity-based fine-tuning (CBFT), which we analyze using several synthetic datasets for the task of reducing a model's reliance on spurious attributes.
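The (lack of) linear connectivity discussed above is typically probed by evaluating the loss along the straight line between the parameters of two trained models. A minimal PyTorch sketch of that probe, assuming the two models share an architecture and a loss function; this illustrates only the connectivity check, not the CBFT procedure:

    import copy
    import torch

    def loss_on_linear_path(model_a, model_b, loss_fn, data_loader, steps=11):
        # Evaluate the loss at evenly spaced points on the segment
        # theta(alpha) = (1 - alpha) * theta_A + alpha * theta_B.
        # A pronounced barrier suggests the minimizers are not linearly connected.
        sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
        probe = copy.deepcopy(model_a)
        losses = []
        for step in range(steps):
            alpha = step / (steps - 1)
            mixed = {
                k: (1 - alpha) * v + alpha * sd_b[k] if torch.is_floating_point(v) else v
                for k, v in sd_a.items()      # integer buffers (e.g. BN counters) kept as-is
            }
            probe.load_state_dict(mixed)
            probe.eval()
            total, count = 0.0, 0
            with torch.no_grad():
                for x, y in data_loader:
                    total += loss_fn(probe(x), y).item() * len(y)
                    count += len(y)
            losses.append(total / count)
        return losses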
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
Maziar Gomrokchi
Susan Amin
Hossein Aboutalebi
Alexander Wong
While significant research advances have been made in the field of deep reinforcement learning, there have been no concrete adversarial attack strategies in the literature tailored for studying the vulnerability of deep reinforcement learning algorithms to membership inference attacks. In such attacks, the adversary targets the set of collected input data on which the deep reinforcement learning algorithm has been trained. To address this gap, we propose an adversarial attack framework designed for testing the vulnerability of a state-of-the-art deep reinforcement learning algorithm to a membership inference attack. In particular, we design a series of experiments to investigate the impact of temporal correlation, which naturally exists in reinforcement learning training data, on the probability of information leakage. Moreover, we compare the performance of collective and individual membership attacks against the deep reinforcement learning algorithm. Experimental results show that the proposed adversarial attack framework is surprisingly effective at inferring data, with an accuracy exceeding 84% in individual and 97% in collective modes on three different continuous-control MuJoCo tasks, which raises serious privacy concerns. Finally, we show that the learning state of the reinforcement learning algorithm significantly influences the level of privacy breaches.
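Stripped of the reinforcement-learning specifics, a membership inference attack reduces to training a binary classifier that separates records the target model was trained on from records it was not. A generic sketch with synthetic placeholder features (deliberately not the paper's trajectory-based framework):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder attack features: in a real attack each row would summarise the
    # target model's behaviour on one candidate record (e.g. per-trajectory losses),
    # so with these random placeholders the attack stays at chance level.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 8))
    is_member = rng.integers(0, 2, size=1000)     # placeholder membership ground truth

    X_tr, X_te, y_tr, y_te = train_test_split(features, is_member,
                                              test_size=0.3, random_state=0)
    attack = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("attack accuracy:", accuracy_score(y_te, attack.predict(X_te)))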
Meta Pseudo Labels for Anomaly Detection via Partially Observed Anomalies
Sinong Zhao
Zhaoyang Yu
Xiaofei Wang
T. Marbach
Gang Wang
X. Liu