Publications

Clinically Plausible Pathology-Anatomy Disentanglement in Patient Brain MRI with Structured Variational Priors
Anjun Hu
Jean-Pierre R. Falet
Douglas Arnold
Sotirios A. Tsaftaris
We propose a hierarchically structured variational inference model for accurately disentangling observable evidence of disease (e.g. brain lesions or atrophy) from subject-specific anatomy in brain MRIs. With flexible, partially autoregressive priors, our model (1) addresses the subtle and fine-grained dependencies that typically exist between anatomical and pathological generating factors of an MRI to ensure the clinical validity of generated samples; (2) preserves and disentangles finer pathological details pertaining to a patient's disease state. Additionally, we experiment with an alternative training configuration where we provide supervision to a subset of latent units. It is shown that (1) a partially supervised latent space achieves a higher degree of disentanglement between evidence of disease and subject-specific anatomy; (2) when the prior is formulated with an autoregressive structure, knowledge from the supervision can propagate to the unsupervised latent units, resulting in more informative latent representations capable of modelling anatomy-pathology interdependencies.
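The key mechanism described above is a conditional (partially autoregressive) prior over grouped latents. Below is a minimal sketch of that idea, assuming a PyTorch setup with illustrative latent sizes and module names; it is not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a partially autoregressive
# latent prior: the prior over pathology latents is conditioned on the
# anatomy latents, so anatomy-pathology dependencies can be modelled.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PartiallyAutoregressivePrior(nn.Module):
    def __init__(self, anat_dim=32, path_dim=16):
        super().__init__()
        # p(z_path | z_anat): a small MLP maps anatomy latents to the
        # mean and log-variance of the pathology prior.
        self.cond = nn.Sequential(
            nn.Linear(anat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * path_dim),
        )

    def forward(self, z_anat):
        mu, logvar = self.cond(z_anat).chunk(2, dim=-1)
        return mu, logvar

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dimensions.
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

# Usage: z_anat keeps a standard-normal prior; z_path is scored against
# the conditional prior, which is how information can flow between groups.
prior = PartiallyAutoregressivePrior()
z_anat = torch.randn(4, 32)                               # anatomy sample
mu_q, logvar_q = torch.randn(4, 16), torch.zeros(4, 16)   # q(z_path | x)
mu_p, logvar_p = prior(z_anat)
kl_path = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
```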
Teaching Algorithmic Reasoning via In-context Learning
Azade Nova
Behnam Neyshabur
Hanie Sedghi
On the Compositional Generalization Gap of In-Context Learning
Pretrained large generative language models have shown great performance on many tasks, but exhibit low compositional generalization abilities. Scaling such models has been shown to improve their performance on various NLP tasks even just by conditioning them on a few examples to solve the task without any fine-tuning (also known as in-context learning). In this work, we look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning. In the ID settings, the demonstrations are from the same split (test or train) that the model is being evaluated on, and in the OOD settings, they are from the other split. We look at how the relative generalization gap of in-context learning evolves as models are scaled up. We evaluate four model families, OPT, BLOOM, CodeGen and Codex, on three semantic parsing datasets, CFQ, SCAN and GeoQuery, with different numbers of exemplars, and observe a trend of decreasing relative generalization gap as models are scaled up.
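The evaluation protocol turns on where the demonstrations are drawn from. A minimal sketch of the ID/OOD prompt construction, with toy SCAN-style data and assumed helper names (not the paper's code):

```python
# ID: demonstrations come from the same split as the evaluation example;
# OOD: demonstrations come from the other split. Illustrative only.
import random

def build_prompt(demos, query_utterance):
    """Concatenate k exemplars followed by the query, few-shot style."""
    lines = [f"Q: {u}\nA: {p}" for u, p in demos]
    lines.append(f"Q: {query_utterance}\nA:")
    return "\n\n".join(lines)

def make_prompts(train_set, test_set, k=2, setting="ID", seed=0):
    rng = random.Random(seed)
    prompts = []
    for utterance, _gold in test_set:
        pool = test_set if setting == "ID" else train_set
        demos = rng.sample([ex for ex in pool if ex[0] != utterance], k)
        prompts.append(build_prompt(demos, utterance))
    return prompts

# Toy SCAN-style (utterance, parse) pairs.
train = [("jump twice", "JUMP JUMP"), ("walk left", "LTURN WALK")]
test = [("jump left", "LTURN JUMP"), ("walk twice", "WALK WALK"),
        ("run left", "LTURN RUN"), ("look twice", "LOOK LOOK")]
print(make_prompts(train, test, k=2, setting="OOD")[0])
```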
NeurIPS 2022 Competition: Driving SMARTS
Amir Hossein Rasouli
R. Goebel
Matthew E. Taylor
Iuliia Kotseruba
Soheil Alizadeh
Tianpei Yang
Montgomery Alban
Florian Shkurti
Yuzheng Zhuang
Adam Ścibior
Kasra Rezaee
Animesh Garg
Jun Luo
Weinan Zhang
Xinyu Wang
Xiangshan Chen
PatchBlender: A Motion Prior for Video Transformers
Yale Song
R Devon Hjelm
Neel Joshi
A. Chandar
SVRG meets AdaGrad: painless variance reduction
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Teven Le Scao
Angela Fan
Christopher Akiki
Ellie Pavlick
Suzana Ilić
Daniel Hesslow
Roman Castagné
Alexandra Luccioni
François Yvon
Matthias Gallé
J. Tow
Alexander M. Rush
Stella Biderman
Alex Webson
Pawan Sasanka Ammanamanchi
Thomas Wang
Benoît Sagot
Niklas Muennighoff
Albert Villanova del Moral
Olatunji Ruwase
Rachel Bawden
Stas Bekman
Angelina McMillan-Major
Iz Beltagy
Huu Nguyen
Lucile Saulnier
Samson Tan
Pedro Ortiz Suarez
Victor Sanh
Hugo Laurençon
Yacine Jernite
Julien Launay
Margaret Mitchell
Colin Raffel
Aaron Gokaslan
Adi Simhi
Aitor Soroa
Alham Fikri Aji
Amit Alfassy
Anna Rogers
Ariel Kreisberg Nitzav
Canwen Xu
Chenghao Mou
Christopher Klamm
Colin D. Leong
Daniel van Strien
Dragomir R. Radev
Eduardo González Ponferrada
Efrat Levkovizh
Ethan Kim
Eyal Bar Natan
Francesco De Toni
Gérard Dupont
Germán Kruszewski
Giada Pistilli
Hady Elsahar
Hamza Benyamina
Hieu Tran
Ian W. Yu
Idris Abdulmumin
Isaac L. Johnson
Itziar Gonzalez-Dios
Javier de la Rosa
Jenny Chim
Jesse Dodge
Jian Zhu
Jonathan Chang
Jörg Frohberg
Josephine L. Tobing
J. Bhattacharjee
Khalid Almubarak
Kimbo Chen
Kyle Lo
Leandro von Werra
Leon Weber
Long Phan
Loubna Ben Allal
Ludovic Tanguy
Manuel Romero Muñoz
Maraim Masoud
María Grandury
Mario Šaško
Max Huang
Maximin Coavoux
Mayank Singh
Mike Tian-Jian Jiang
Vu Minh Chien
Mohammad Ali Jauhar
Mustafa Ghaleb
Nishant Subramani
Nora Kassner
Nurulaqilla Khamis
Olivier Nguyen
Omar Espejel
Ona de Gibert
Paulo Villegas
Pierre Colombo
Priscilla A. Amuok
Quentin Lhoest
Rheza Harliman
Rishi Bommasani
Roberto Luis López
Rui Ribeiro
Salomey Osei
Sampo Pyysalo
Sebastian Nagel
Shamik Bose
Shamsuddeen Hassan Muhammad
Shanya Sharma Sharma
Shayne Longpre
Somaieh Nikpoor
S. Silberberg
Suhas Pai
Sydney Zink
Tiago Timponi Torrent
Timo Schick
Tristan Thrush
Valentin Danchev
Vassilina Nikoulina
Veronika Laippala
Violette Lepercq
Vrinda Prabhu
Zaid Alyafeai
Zeerak Talat
Arun Raja
Benjamin Heinzerling
Chenglei Si
Elizabeth E. Salesky
Sabrina J. Mielke
Wilson Y. Lee
Abheesht Sharma
Andrea Santilli
Antoine Chaffin
Arnaud Stiegler
Debajyoti Datta
Eliza Szczechla
Gunjan Chhablani
Han Wang
Harshit Pandey
Hendrik Strobelt
Jason Alan Fries
Jos Rozen
Leo Gao
Lintang A. Sutawika
M. Saiful Bari
Maged S. Al-shaibani
Matteo Manica
Nihal V. Nayak
Ryan Teehan
Samuel Albanie
Sheng Shen
Srulik Ben-David
Stephen H. Bach
Taewoon Kim
T. Bers
Thibault Févry
Trishala Neeraj
Urmish Thakker
Vikas Raunak
Xiang Tang
Zheng Xin Yong
Zhiqing Sun
Shaked Brody
Y. Uri
Hadar Tojarieh
Adam Roberts
Hyung Won Chung
Jaesung Tae
Jason Phang
Ofir Press
Conglong Li
D. Narayanan
Hatim Bourfoune
Jared Casper
Jeff Rasley
Max Ryabinin
Mayank Mishra
Minjia Zhang
Mohammad Shoeybi
Myriam Peyrounette
Nicolas Patry
Nouamane Tazi
Omar Sanseviero
Patrick von Platen
Pierre Cornette
Pierre François Lavallée
Rémi Lacroix
Samyam Rajbhandari
Sanchit Gandhi
Shaden Smith
Stéphane Requena
Suraj Patil
Tim Dettmers
Ahmed Baruwa
Amanpreet Singh
Anastasia Cheveleva
Anne-Laure Ligozat
Arjun Subramonian
Aurélie Névéol
Charles Lovering
Dan Garrette
D. Tunuguntla
Ehud Reiter
Ekaterina Taktasheva
E. Voloshina
Eli Bogdanov
Genta Indra Winata
Hailey Schoelkopf
Jan-Christoph Kalo
Jekaterina Novikova
Jessica Zosa Forde
Xiangru Tang
Jungo Kasai
Ken Kawamura
Liam Hazan
Marine Carpuat
Miruna-Adriana Clinciu
Najoung Kim
Newton Cheng
O. Serikov
Omer Antverg
Oskar van der Wal
Rui Zhang
Ruochen Zhang
Sebastian Gehrmann
Shachar Mirkin
S. Pais
Tatiana Shavrina
Thomas Scialom
Tian Yun
Tomasz Limisiewicz
Verena Teresa Rieser
Vitaly Protasov
V. Mikhailov
Yada Pruksachatkun
Yonatan Belinkov
Zachary Bamberger
Zdeněk Kasner
A. Pestana
Amir Feizpour
Ammar Khan
Amy Faranak
A. Santos
Anthony Hevia
Antigona Unldreaj
Arash Aghagol
Arezoo Abdollahi
Aycha Tammour
Azadeh Hajihosseini
Bahareh Behroozi
Benjamin A. Ajibade
B. Saxena
Carlos Muñoz Ferrandis
Danish Contractor
D. Lansky
Davis David
Douwe Kiela
Duong Anh Nguyen
Edward Chwee Kheng Tan
Emi Baylor
Ezinwanne Ozoani
F. Mirza
Frankline Ononiwu
Habib Rezanejad
H.A. Jones
Indrani Bhattacharya
Irene Solaiman
Irina Sedenko
Isar Nejadgholi
J. Passmore
Joshua Seltzer
Julio Bonis Sanz
Karen Fort
Livia Macedo Dutra
Mairon Samagaio
Maraim Elbadri
Margot Mieskes
Marissa Kumar Gerchick
Martha Akinlolu
Michael McKenna
Mike Qiu
M. Ghauri
Mykola Burynok
Nafis Abrar
Nazneen Fatema Rajani
Nour Elkott
N. Fahmy
Olanrewaju Samuel
Ran An
R. Kromann
Ryan Hao
Samira Hassan Alizadeh
Sarmad Shubber
Silas L. Wang
Sourav Roy
Sylvain Viguier
Thanh-Cong Le
Tobi Oyebade
T. Le
Yoyo Yang
Zach Nguyen
Abhinav R. Kashyap
Alfredo Palasciano
Alison Callahan
Anima Shukla
Antonio Miranda-Escalada
Ayush Singh
Benjamin Beilharz
Bo Wang
Caio Matheus Fonseca De Brito
Chenxi Zhou
Chirag Jain
Chuxin Xu
Clémentine Fourrier
Daniel León Periñán
Daniel Molano
Dian Yu
Enrique Manjavacas
Fabio Barth
Florian Fuhrimann
Gabriel Altay
Giyaseddin Bayrak
Gully Burns
Helena U. Vrabec
I. Bello
Isha Dash
J. Kang
John Michael Giorgi
Jonas Golde
J. Posada
Karthi Sivaraman
Lokesh Bulchandani
Li Li
Luisa Shinzato
Madeleine Hahn de Bykhovetz
Maiko Takeuchi
Marc Pamies
M. A. Castillo
Marianna Nezhurina
Mario Sanger
Matthias Samwald
Michael Joseph Cullan
Michael Weinberg
Michiel De Wolf
Mina Mihaljcic
Minna Liu
Moritz Freidank
Myungsun Kang
Natasha Seelam
Nathan Dahlberg
Nicholas Michio Broad
Nikolaus Muellner
Pascale Fung
Patricia Haller
Ramya Chandrasekhar
Patrick Haller
Renata Eisenberg
Robert Martin
Rodrigo Canalli
Rosaline Su
Ruisi Su
Samuel Cahyawijaya
Samuele Garda
Shlok S Deshmukh
Shubhanshu Mishra
Sid Kiblawi
Simon Ott
Sinee Sang-aroonsiri
Srishti Kumar
Stefan Schweter
Sushil Pratap Bharati
Tanmay Laud
Théo Gigant
Tomoya Kainuma
Wojciech Kusa
Yanis Labrak
Yashasvi Bajaj
Yash Venkatraman
Yifan Xu
Ying Xu
Yu Xu
Zhijun Tan
Zhongli Xie
Zifan Ye
Mathilde Le Bras
Younes Belkada
Thomas Wolf
Flaky Performances when Pretraining on Relational Databases
A debriefing tool to acquire non-technical skills in trauma courses
Fabio Botelho
Jason M. Harley
Natalie Yanchar
Simone Abib
Ilana Bank
Dissociable brain structural asymmetry patterns reveal unique phenome-wide profiles
Ralph Adolphs
Lynn K. Paul
Vaibhav Sharma
Joern Diedrichsen
B. T. Thomas Yeo
Object-centric causal representation learning
PaReco: patched clones and missed patches among the divergent variants of a software family
Poedjadevie Kadjel Ramkisoen
John Businge
Brent van Bladel
Alexandre Decan
Serge Demeyer
Coen De Roover
Re-using whole repositories as a starting point for new projects is often done by maintaining a variant fork parallel to the original. However, the common artifacts between both are not always kept up to date. As a result, patches are not optimally integrated across the two repositories, which may lead to sub-optimal maintenance between the variant and the original project. A bug existing in both repositories can be patched in one but not the other (we see this as a missed opportunity) or it can be manually patched in both, probably by different developers (we see this as effort duplication). In this paper we present a tool (named PaReCo) which relies on clone detection to mine cases of missed opportunity and effort duplication from a pool of patches. We analyzed 364 (source to target) variant pairs with 8,323 patches, resulting in a curated dataset containing 1,116 cases of effort duplication and 1,008 cases of missed opportunities. We achieve a precision of 91%, recall of 80%, accuracy of 88%, and F1-score of 85%. Furthermore, we investigated the time interval between patches and found that, on average, missed patches in the target variants were introduced in the source variants 52 weeks earlier. Consequently, PaReCo can be used to manage variability in “time” by automatically identifying interesting patches in later project releases to be backported to supported earlier releases.
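The two mined categories reduce to a similarity test between patches and code regions across the variant pair. A minimal illustrative sketch of that classification logic, using difflib as a stand-in for PaReCo's clone detector (threshold and names are assumptions, not the tool's internals):

```python
# A source-variant patch counts as "effort duplication" if a highly
# similar patch also appears in the target variant, and as a "missed
# opportunity" if the pre-patch (buggy) code still exists there unpatched.
from difflib import SequenceMatcher

SIM_THRESHOLD = 0.8  # assumed clone-similarity cutoff

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def classify_patch(src_patch: str, src_prepatch: str,
                   target_patches: list, target_code: list) -> str:
    if any(similarity(src_patch, p) >= SIM_THRESHOLD for p in target_patches):
        return "effort duplication"   # fixed independently in both variants
    if any(similarity(src_prepatch, c) >= SIM_THRESHOLD for c in target_code):
        return "missed opportunity"   # same buggy code, never patched
    return "not applicable"           # code diverged too far to compare

# Toy usage with single-line "fragments" standing in for code regions.
print(classify_patch(
    src_patch="if x is not None: return x.value",
    src_prepatch="return x.value",
    target_patches=[],
    target_code=["return x.value"],
))  # -> "missed opportunity"
```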