
Jonathan Pilault

PhD - Polytechnique
Principal supervisor
Research topics
Natural Language Processing

Publications

On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
Raymond Li
Sandeep Subramanian
We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. Note: The abstract above was collaboratively written by the authors and one of the models presented in this paper based on an earlier draft of this paper.
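To make the extract-then-abstract idea concrete, here is a minimal sketch of the pipeline shape: select a few salient sentences, then place them before a generation prompt so a transformer language model is conditioned on relevant content. The frequency-based sentence scorer, the "TL;DR:" prompt, and the function names `extractive_step` and `build_lm_input` are illustrative assumptions, not the paper's actual extractor or model.

```python
from collections import Counter
import re

def split_sentences(text):
    # Naive sentence splitter; a real pipeline would use a proper tokenizer.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def score_sentence(sentence, doc_counts, total):
    # Score a sentence by the average document-level frequency of its words.
    words = re.findall(r'\w+', sentence.lower())
    if not words:
        return 0.0
    return sum(doc_counts[w] / total for w in words) / len(words)

def extractive_step(document, k=3):
    """Pick the k most salient sentences to use as conditioning context."""
    sentences = split_sentences(document)
    words = re.findall(r'\w+', document.lower())
    counts, total = Counter(words), max(len(words), 1)
    ranked = sorted(sentences, key=lambda s: score_sentence(s, counts, total), reverse=True)
    # Keep the selected sentences in their original document order.
    selected = [s for s in sentences if s in ranked[:k]]
    return " ".join(selected)

def build_lm_input(document, k=3):
    # The extracted sentences precede a generation cue, so the transformer
    # language model sees the most relevant content before summarizing.
    context = extractive_step(document, k)
    return f"{context}\nTL;DR:"

if __name__ == "__main__":
    doc = ("Transformer language models can summarize long documents. "
           "An extractive step first selects salient sentences. "
           "The language model then generates an abstractive summary "
           "conditioned on that extracted context.")
    print(build_lm_input(doc, k=2))
```

The resulting string would be fed to a trained transformer language model, which generates the abstractive summary after the prompt cue.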
Learning to Summarize Long Texts with Memory Compression and Transfer
On the impressive performance of randomly weighted encoders in summarization tasks
In this work, we investigate the performance of untrained randomly initialized encoders in a general class of sequence to sequence models and compare their performance with that of fully-trained encoders on the task of abstractive summarization. We hypothesize that random projections of an input text have enough representational power to encode the hierarchical structure of sentences and semantics of documents. Using a trained decoder to produce abstractive text summaries, we empirically demonstrate that architectures with untrained randomly initialized encoders perform competitively with respect to the equivalent architectures with fully-trained encoders. We further find that the capacity of the encoder not only improves overall model generalization but also closes the performance gap between untrained randomly initialized and fully-trained encoders. To our knowledge, this is the first time that general sequence to sequence models with attention are assessed for trained and randomly projected representations on abstractive summarization.
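As a rough illustration of the random-encoder setup, the sketch below freezes a randomly initialized GRU encoder and trains only the decoder side. The architecture, dimensions, shared embedding, and omission of the attention mechanism are simplifying assumptions for brevity, not the paper's exact models.

```python
import torch
import torch.nn as nn

class RandomEncoderSeq2Seq(nn.Module):
    """Seq2seq sketch in which the encoder keeps its random initialization."""

    def __init__(self, vocab_size=1000, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

        # Freeze the encoder: it acts as a fixed random projection of the
        # input, while the decoder and output layer are trained as usual.
        for p in self.encoder.parameters():
            p.requires_grad = False

    def forward(self, src_ids, tgt_ids):
        enc_out, enc_state = self.encoder(self.embed(src_ids))
        dec_out, _ = self.decoder(self.embed(tgt_ids), enc_state)
        return self.out(dec_out)

if __name__ == "__main__":
    model = RandomEncoderSeq2Seq()
    # Only parameters that still require gradients reach the optimizer,
    # so the encoder's random weights never change during training.
    optim = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
    src = torch.randint(0, 1000, (2, 12))  # batch of 2 source sequences
    tgt = torch.randint(0, 1000, (2, 8))   # batch of 2 target prefixes
    logits = model(src, tgt)
    print(logits.shape)  # torch.Size([2, 8, 1000])
```

Comparing this frozen-encoder variant against an identical model with a trainable encoder is the kind of controlled comparison the paper reports.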