Publications

Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge
Vincent Andrearczyk
Valentin Oreiller
Sarah Boughdad
Catherine Cheze Le Rest
Olena Tankyevych
Hesham M. Elhalawani
Mario Jreige
John O. Prior
Dimitris Visvikis
Mathieu Hatt
Adrien Depeursinge
Balaur: Language Model Pretraining with Lexical Semantic Relations
Andrei Mircea
Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset
Shima Rastegarnia
Marie St-Laurent
Elizabeth DuPre
Basile Pinsard
Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model
Parishad BehnamGhader
Santiago Miret
Augmenting pretrained language models with retrievers to select the supporting documents has shown promise in effectively solving common NLP problems, including language modeling and question answering, in an interpretable way. In this paper, we first study the strengths and weaknesses of different retriever-augmented language models (REALM, …
Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo
Tajuddeen Gwadabe
Clara E. Rivera
Jonathan H. Clark
Sebastian Ruder
Bonaventure F. P. Dossou
Abdou Aziz DIOP
Claytone Sikasote
Gilles HACHEME
Happy Buzaaba
Ignatius Ezeani
Rooweither Mabuya
Salomey Osei
Chris Emezue
Albert Kahira
Shamsuddeen Hassan Muhammad
Akintunde Oladipo
Abraham Toluwase Owodunni
Atnafu Lambebo Tonja
Iyanuoluwa Shode
Akari Asai
Tunde Oluwaseyi Ajayi
Clemencia Siro
Stephen Arthur
Mofetoluwa Adeyemi
Orevaoghene Ahia
Aremu Anuoluwapo
Oyinkansola Awosan
Chiamaka Ijeoma Chukwuneke
Bernard Opoku
Ayodele Awokoya
Verrah Akinyi Otiende
Christine Mwase
Boyd Sinkala
Andre Niyongabo Rubungo
Daniel Ajisafe
Emeka Felix Onwuegbuzia
Habib Mbow
Emile Niyomutabazi
Eunice Mukonde
Falalu Lawan
Ibrahim Ahmad
Jesujoba Oluwadara Alabi
Martin Namukombo
CHINEDU EMMANUEL MBONU
Mofya Phiri
Neo Putini
Ndumiso Mngoma
Priscilla A. Amuok
Ruqayya Nasir Iro
Sonia Adhiambo
Current AI applications in neurology: Brain imaging
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Raghav Mehta
Douglas Arnold
Nick Pawlowski
From physics to sentience: Deciphering the semantics of the free-energy principle and evaluating its claims: Comment on "Path integrals, particular kinds, and strange things" by Karl Friston et al.
Zahra Sheikhbahaee
Adam Safron
Casper Hesp
Large language models: What could they do for neurology?
A large-scale exploratory study of android sports apps in the google play store
Bhagya Chembakottu
Heng Li
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
Amirhossein Kazemnejad
Mehdi Rezagholizadeh
Prasanna Parthasarathi
Nash Learning from Human Feedback
Rémi Munos
Michal Valko
Daniele Calandriello
M. G. Azar
Mark Rowland
Zhaohan Daniel Guo
Yunhao Tang
Matthieu Geist
Thomas Mesnard
Andrea Michi
Marco Selvi
Sertan Girgin
Nikola Momchev
Olivier Bachem
Daniel J Mankowitz
Bilal Piot
Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Typically, RLHF involves the initial step of learning a reward model from human feedback, often expressed as preferences between pairs of text generations produced by a pre-trained LLM. Subsequently, the LLM's policy is fine-tuned by optimizing it to maximize the reward model through a reinforcement learning algorithm. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In this study, we introduce an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a preference model, which is conditioned on two inputs given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). In the context of a tabular policy representation, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies, with the last iterate converging to the regularized Nash equilibrium. Additionally, we explore parametric representations of policies and introduce gradient descent algorithms for deep-learning architectures. To demonstrate the effectiveness of our approach, we present experimental results involving the fine-tuning of an LLM for a text summarization task. We believe NLHF offers a compelling avenue for preference learning and policy optimization with the potential of advancing the field of aligning LLMs with human preferences.
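The tabular setting described in the abstract can be illustrated with a toy sketch (not the paper's implementation): given a preference matrix `P` where `P[i, j]` is the probability that action `i` is preferred to action `j`, a regularized mirror-descent (multiplicative-weights) update is iterated until the policy is hard to beat. The matrix, step size `eta`, and regularization `tau` below are made-up illustration values.

```python
import numpy as np

# Hypothetical 3-action preference model: P[i, j] = Pr(action i preferred to j).
# P[i, j] + P[j, i] = 1, so the induced game is symmetric zero-sum.
P = np.array([
    [0.5, 0.8, 0.4],
    [0.2, 0.5, 0.9],
    [0.6, 0.1, 0.5],
])

def nash_md_step(pi, P, eta=0.2, tau=0.5):
    """One mirror-descent update in the spirit of Nash-MD.

    Each action's score is its expected preference against the current
    policy; tau shrinks the log-policy toward uniform (regularization),
    which is what gives last-iterate convergence here.
    """
    score = P @ pi                                  # Pr(a preferred to pi)
    logits = (1 - eta * tau) * np.log(pi) + eta * score
    new_pi = np.exp(logits - logits.max())          # stable softmax
    return new_pi / new_pi.sum()

pi = np.ones(3) / 3                                 # start uniform
for _ in range(500):
    pi = nash_md_step(pi, P)

# Exploitability: how much the best single action beats the policy.
# Near the (regularized) equilibrium this gap is small.
exploitability = (P @ pi).max() - pi @ P @ pi
```

Because `P` is cyclic (each action beats one other), no pure action is an equilibrium; the iteration settles on a mixed policy, mirroring the abstract's point that the Nash equilibrium of a preference model need not coincide with maximizing any single reward.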