
Ryan Lowe

Alumni

Publications

Towards Policy-Guided Conversational Recommendation with Dialogue Acts
Paul Crook
Y-Lan Boureau
J. Weston
Akbar Karimi
Leonardo Rossi
Andrea Prati
Wenqiang Lei
Xiangnan He
Qingyun Wu
Yisong Miao
Richang Hong
Min-Yen Kan
Tat-Seng Chua
Raymond Li
Hannes Schulz
Zujie Liang
Huang Hu
Can Xu
Jian Miao
Lizi Liao … (see 47 more)
Ryuichi Takanobu
Yunshan Ma
Xun Yang
Wenchang Ma
Minlie Huang
Minghao Tu
Iulian Serban
Aaron C. Courville
David Silver
Julian Schrittwieser
K. Simonyan
Ioannis Antonoglou
Aja Huang
A. Guez
Hanlin Zhu
O. Vinyals
Igor Babuschkin
M. Mathieu
Max Jaderberg
Wojciech M. Czarnecki
A. Dudzik
Petko Georgiev
Richard Powell
T. Ewalds
Dan Horgan
M. Kroiss
Ivo Danihelka
J. Agapiou
Junhyuk Oh
Valentin Dalibard
David Choi
L. Sifre
Yury Sulsky
Sasha Vezhnevets
James Molloy
Trevor Cai
D. Budden
T. Paine
Ziyu Wang
Tobias Pfaff
Tobias Pohlen
Introduction to NIPS 2017 Competition Track
Sergio Escalera
Markus Weimer
Mikhail Burtsev
Valentin Malykh
Varvara Logacheva
Iulian V. Serban
Alexander Rudnicky
Alan W. Black
Shrimai Prabhumoye
Łukasz Kidziński
Sharada Prasanna Mohanty
Carmichael F. Ong
Jennifer L. Hicks
Sergey Levine
Marcel Salathé
Scott Delp
Iker Huerga
Alexander Grigorenko … (see 19 more)
Leifur Thorbergsson
Anasuya Das
Kyla Nemitz
Jenna Sandker
Stephen King
Alexander S. Ecker
Leon A. Gatys
Matthias Bethge
Jordan Boyd-Graber
Shi Feng
Pedro Rodriguez
Mohit Iyyer
He He
Hal Daumé III
Sean McGregor
Amir Banifatemi
Alexey Kurakin
Ian G Goodfellow
The First Conversational Intelligence Challenge
Mikhail Burtsev
Varvara Logacheva
Valentin Malykh
Iulian V. Serban
Shrimai Prabhumoye
Alan W. Black
Alexander Rudnicky
World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions
Teng Long
Jackie CK Cheung
Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.
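To make the setup concrete, here is a minimal sketch in the spirit of the hierarchical approach described above: one encoder reads the document context around the blank, another reads each candidate entity's external description, and candidates are ranked by a similarity score. The dimensions, the dot-product scoring, and all class and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of rare entity prediction: score candidate entities for a blank in a
# document by comparing a context encoding with an encoding of each entity's
# external description. Hyperparameters and scoring are assumptions.
import torch
import torch.nn as nn


class RareEntityScorer(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One LSTM encodes the document context around the blank,
        # another encodes each candidate entity's description.
        self.context_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.desc_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, context_ids, desc_ids):
        # context_ids: (batch, ctx_len); desc_ids: (batch, n_cands, desc_len)
        _, (ctx_h, _) = self.context_lstm(self.embed(context_ids))
        ctx_h = ctx_h[-1]                                   # (batch, hidden)
        b, n, L = desc_ids.shape
        _, (desc_h, _) = self.desc_lstm(self.embed(desc_ids.view(b * n, L)))
        desc_h = desc_h[-1].view(b, n, -1)                  # (batch, n_cands, hidden)
        # Dot-product score between the context and each candidate description.
        return torch.einsum("bh,bnh->bn", ctx_h, desc_h)    # (batch, n_cands)


if __name__ == "__main__":
    model = RareEntityScorer(vocab_size=1000)
    context = torch.randint(0, 1000, (2, 20))           # two documents, 20 tokens each
    descriptions = torch.randint(0, 1000, (2, 5, 15))    # five candidate entities each
    print(model(context, descriptions).shape)             # torch.Size([2, 5])
```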
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.
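The key ingredient is the per-turn latent variable. The sketch below isolates that step under assumed dimensions and names: a Gaussian prior conditioned on the dialogue context, an approximate posterior that also sees the next utterance during training, reparameterized sampling, and the KL term that would be added to the training loss. It is a simplification, not the paper's implementation.

```python
# Sketch of the latent-variable step in a hierarchical encoder-decoder:
# sample a per-turn Gaussian z from a context-conditioned prior (or from a
# posterior that also sees the next utterance during training) and return
# the KL(q || p) penalty used in the variational training objective.
import torch
import torch.nn as nn


class LatentTurnVariable(nn.Module):
    def __init__(self, context_dim, utt_dim, z_dim):
        super().__init__()
        self.prior = nn.Linear(context_dim, 2 * z_dim)              # -> (mu, log_var)
        self.posterior = nn.Linear(context_dim + utt_dim, 2 * z_dim)

    @staticmethod
    def _sample(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        eps = torch.randn_like(mu)                   # reparameterization trick
        return mu + eps * torch.exp(0.5 * log_var), mu, log_var

    def forward(self, context_h, next_utt_h=None):
        p_stats = self.prior(context_h)
        if next_utt_h is None:                       # generation: sample from the prior
            z, _, _ = self._sample(p_stats)
            return z, torch.zeros(())
        q_stats = self.posterior(torch.cat([context_h, next_utt_h], dim=-1))
        z, q_mu, q_logvar = self._sample(q_stats)
        p_mu, p_logvar = p_stats.chunk(2, dim=-1)
        # KL(q || p) between two diagonal Gaussians, summed over z dimensions.
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                    - 1).sum(-1).mean()
        return z, kl


if __name__ == "__main__":
    layer = LatentTurnVariable(context_dim=32, utt_dim=32, z_dim=16)
    ctx, utt = torch.randn(4, 32), torch.randn(4, 32)
    z, kl = layer(ctx, utt)
    print(z.shape, float(kl))
```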
Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus
Iulian Vlad Serban
Chia-Wei Liu
In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time consuming and expensive. We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general.
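As an illustration of the retrieval-style evaluation, the sketch below computes the Recall@k form of Next Utterance Classification: for each example the model scores the ground-truth response against a list of distractors and is credited when the true response lands in the top k. The word-overlap scorer and the toy example are placeholders standing in for a trained model.

```python
# Sketch of the Recall@k metric used in Next Utterance Classification (NUC).
from typing import Callable, List, Sequence, Tuple


def recall_at_k(score: Callable[[str, str], float],
                examples: Sequence[Tuple[str, str, List[str]]],
                k: int = 1) -> float:
    """examples are (context, true_response, distractors) triples."""
    hits = 0
    for context, true_response, distractors in examples:
        candidates = [true_response] + list(distractors)
        ranked = sorted(candidates, key=lambda c: score(context, c), reverse=True)
        hits += true_response in ranked[:k]     # credit if the truth is in the top k
    return hits / len(examples)


if __name__ == "__main__":
    # Toy scorer: word overlap between context and candidate, an assumption
    # standing in for a trained retrieval model.
    def overlap(context: str, candidate: str) -> float:
        return len(set(context.split()) & set(candidate.split()))

    data = [("how do i install apt packages", "use sudo apt install",
             ["reboot the machine", "what is your favourite colour"])]
    print(recall_at_k(overlap, data, k=1))      # 1.0 on this toy example
```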
An Actor-Critic Algorithm for Sequence Prediction
We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a critic network that is trained to predict the value of an output token, given the policy of an actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.
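A stripped-down view of the update described above, with many simplifications assumed (toy GRU actor and critic, a random scalar standing in for sentence-level BLEU, and a critic that does not actually see the reference output): the critic predicts per-token values for the sampled sequence, the actor's log-probabilities are weighted by those values, and the critic is regressed toward the observed task score.

```python
# Simplified single training step of actor-critic sequence training.
import torch
import torch.nn as nn

vocab, hidden, T = 50, 32, 6
actor = nn.GRU(vocab, hidden, batch_first=True)
actor_head = nn.Linear(hidden, vocab)
critic = nn.GRU(vocab, hidden, batch_first=True)       # the paper's critic also sees the reference
critic_head = nn.Linear(hidden, vocab)
opt = torch.optim.Adam(list(actor.parameters()) + list(actor_head.parameters())
                       + list(critic.parameters()) + list(critic_head.parameters()), lr=1e-3)

# Random one-hot inputs standing in for a decoded prefix.
x = torch.eye(vocab)[torch.randint(0, vocab, (4, T))]          # (batch, T, vocab)

h_a, _ = actor(x)
logits = actor_head(h_a)                                       # (batch, T, vocab)
probs = torch.softmax(logits, dim=-1)
sampled = torch.multinomial(probs.reshape(-1, vocab), 1).reshape(4, T)

h_c, _ = critic(x)
q_all = critic_head(h_c)                                       # per-token value estimates
q_sampled = q_all.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

log_p = torch.log_softmax(logits, dim=-1).gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
actor_loss = -(log_p * q_sampled.detach()).mean()              # policy gradient weighted by critic values

task_score = torch.rand(4)                                     # placeholder for a sentence-level BLEU score
critic_loss = ((q_sampled - task_score.unsqueeze(1)) ** 2).mean()

opt.zero_grad()
(actor_loss + critic_loss).backward()
opt.step()
```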