Publications

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Avi Singh
John D Co-Reyes
Piyush Patil
Xavier Garcia
Peter J. Liu
James Harrison
Jaehoon Lee
Aaron T Parisi
Abhishek Kumar
A. Alemi
Alex Rizkowsky
Azade Nova
Ben Adlam
Bernd Bohnet
Hanie Sedghi
Gamaleldin Fathy Elsayed
Igor Mordatch … (21 more authors)
Isabelle Simpson
Izzeddin Gur
Jasper Snoek
Jeffrey Pennington
Jiri Hron
Kathleen Kenealy
Kevin Swersky
Kshiteej Mahajan
Laura Culp
Lechao Xiao
Maxwell Bileschi
Noah Constant
Roman Novak
Rosanne Liu
Tris Brian Warkentin
Yundi Qian
Ethan Dyer
Behnam Neyshabur
Jascha Sohl-Dickstein
Yamini Bansal
Noah Fiedel
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST…
Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset
Shima Rastegarnia
Elizabeth DuPre
Basile Pinsard
Lune P Bellec
Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model
Augmenting pretrained language models with retrievers to select the supporting documents has shown promise in effectively solving common NLP problems, including language modeling and question answering, in an interpretable way. In this paper, we first study the strengths and weaknesses of different retriever-augmented language models (REALM, …
Current AI applications in neurology: Brain imaging
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Raghav Mehta
Douglas Arnold
Nick Pawlowski
DiPS: Discriminative Pseudo-Label Sampling with Self-Supervised Transformers for Weakly Supervised Object Localization
Shakeeb Murtaza
Soufiane Belharbi
Aydin Sarraf
Eric Granger
From physics to sentience: Deciphering the semantics of the free-energy principle and evaluating its claims: Comment on "Path integrals, particular kinds, and strange things" by Karl Friston et al.
Adam Safron
Casper Hesp
Gene-metabolite annotation with shortest reactional distance enhances metabolite genome-wide association studies results
Sarah Cherkaoui
Sandra Therrien-Laperriere
Yann Ilboudo
Raphaël Poujol
Pamela Mehanna
Melanie E. Garrett
Marilyn J. Telen
Allison E. Ashley-Koch
Pablo Bartolucci
John D. Rioux
Guillaume Lettre
Christine Des Rosiers
Matthieu Ruiz
Julie G. Hussin
Studies combining metabolomics and genetics, known as metabolite genome-wide association studies (mGWAS), have provided valuable insights into our understanding of the genetic control of metabolite levels. However, the biological interpretation of these associations remains challenging due to a lack of existing tools to annotate mGWAS gene-metabolite pairs beyond the use of a conservative statistical significance threshold. Here, we computed the shortest reactional distance (SRD) based on the curated knowledge of the KEGG database to explore its utility in enhancing the biological interpretation of results from three independent mGWAS, including a case study on sickle cell disease patients. Results show that, in reported mGWAS pairs, there is an excess of small SRD values and that SRD values and p-values significantly correlate, even beyond the standard conservative thresholds. The added value of SRD annotation is shown for the identification of potential false-negative hits, exemplified by the finding of gene-metabolite associations with SRD ≤1 that did not reach the standard genome-wide significance cut-off. The wider use of this statistic as an mGWAS annotation would prevent the exclusion of biologically relevant associations and can also identify errors or gaps in current metabolic pathway databases. Our findings highlight the SRD metric as an objective, quantitative, and easy-to-compute annotation for gene-metabolite pairs that can be used to integrate statistical evidence with biological networks.
Growth of TiO2 single crystals by the Verneuil method at different gas flow ratio
Xudong Liu
Hanshu Ma
Wei Wang
Yongqi Hu
Xudong Sun
Large language models: What could they do for neurology?
A large-scale exploratory study of Android sports apps in the Google Play Store
Bhagya Chembakottu
Heng Li
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts of knowledge, it remains unclear how much of this parametric knowledge is actually usable in performing downstream tasks. We propose a systematic framework to measure parametric knowledge utilization in PLMs. Our framework first extracts knowledge from a PLM's parameters and subsequently constructs a downstream task around this extracted knowledge. Performance on this task thus depends exclusively on utilizing the model's possessed knowledge, avoiding confounding factors like insufficient signal. As an instantiation, we study factual knowledge of PLMs and measure utilization across 125M to 13B parameter PLMs. We observe that: (1) PLMs exhibit two gaps, in acquired vs. utilized knowledge; (2) they show limited robustness in utilizing knowledge under distribution shifts; and (3) larger models close the acquired knowledge gap, but the utilized knowledge gap remains. Overall, our study provides insights into PLMs' capabilities beyond their acquired knowledge.
Predictive inference for travel time on transportation networks
Aurélie Labbe
Denis Larocque