Publications

Predicting Success in Goal-Driven Human-Human Dialogues
Michael Noseworthy
Jackie CK Cheung
In goal-driven dialogue systems, success is often defined based on a structured definition of the goal. This requires that the dialogue system be constrained to handle a specific class of goals and that there be a mechanism to measure success with respect to that goal. However, in many human-human dialogues the diversity of goals makes it infeasible to define success in such a way. To address this scenario, we consider the task of automatically predicting success in goal-driven human-human dialogues using only the information communicated between participants in the form of text. We build a dataset from stackoverflow.com that consists of exchanges between two users in the technical domain, where ground-truth success labels are available. We then propose a turn-based hierarchical neural network model that can be used to predict success without requiring a structured goal definition. We show this model outperforms rule-based heuristics and other baselines, as it is able to detect patterns over the course of a dialogue and capture notions such as gratitude.
Self-organized Hierarchical Softmax
We propose a new self-organizing hierarchical softmax formulation for neural-network-based language models over large vocabularies. Instead of using a predefined hierarchical structure, our approach is capable of learning word clusters with clear syntactic and semantic meaning during the language model training process. We provide experiments on standard benchmarks for language modeling and sentence compression tasks. We find that this approach is as fast as other efficient softmax approximations, while achieving comparable or even better performance relative to similar full softmax models.
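Hierarchical softmax, on which this work builds, factorizes each word's probability into a cluster term and a within-cluster term, P(w | h) = P(c(w) | h) · P(w | c(w), h). As background, here is a minimal numerical sketch of that two-level factorization. All sizes and the fixed partition are hypothetical illustrations; the paper's contribution is learning the clusters during training, which this sketch deliberately does not do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical): 12 words split evenly into 3 clusters,
# hidden state of size 8. A fixed partition stands in for the learned one.
V, C, H = 12, 3, 8
word_to_cluster = np.repeat(np.arange(C), V // C)
cluster_words = [np.where(word_to_cluster == c)[0] for c in range(C)]

# Parameters: one softmax over clusters, one per-cluster softmax over its words.
W_cluster = rng.normal(size=(H, C))
W_word = rng.normal(size=(H, V))

def softmax(x):
    x = x - x.max()  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()

def word_prob(h, w):
    """P(w | h) = P(cluster(w) | h) * P(w | cluster(w), h)."""
    c = word_to_cluster[w]
    p_cluster = softmax(h @ W_cluster)[c]
    in_cluster = cluster_words[c]
    p_word = softmax(h @ W_word[:, in_cluster])[np.where(in_cluster == w)[0][0]]
    return p_cluster * p_word

h = rng.normal(size=H)
total = sum(word_prob(h, w) for w in range(V))  # a valid distribution sums to 1
```

The speedup comes from normalizing over C clusters plus one cluster's words rather than over all V words, roughly O(sqrt(V)) per prediction with balanced clusters.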
A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering
Anna Rohrbach
Christopher Pal
While deep convolutional neural networks frequently approach or exceed human-level performance at benchmark tasks involving static images, extending this success to moving images is not straightforward. Having models which can learn to understand video is of interest for many applications, including content recommendation, prediction, summarization, event/object detection, and understanding human visual perception, but many domains lack sufficient data to explore and perfect video models. To address the need for a simple, quantitative benchmark for developing and understanding video models, we present MovieFIB, a fill-in-the-blank question-answering dataset with over 300,000 examples, based on descriptive video annotations for the visually impaired. In addition to presenting statistics and a description of the dataset, we perform a detailed analysis of the predictions of 5 different models and compare these with human performance. We investigate the relative importance of language, static (2D) visual features, and moving (3D) visual features, as well as the effects of increasing dataset size, the number of frames sampled, and vocabulary size. We show that this task is not solvable by a language model alone, that our model combining 2D and 3D visual information provides the best results, and that all models perform significantly worse than humans. We provide human evaluations of responses given by the different models and find that accuracy on the MovieFIB evaluation corresponds well with human judgement. We suggest avenues for improving video models, and hope that the proposed dataset can be useful for measuring and encouraging progress in this field.
GuessWhat?! Visual Object Discovery through Multi-modal Dialogue
We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, such as spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines for the introduced tasks.
A Closer Look at Memorization in Deep Networks
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that appropriately tuned explicit regularization (e.g., dropout) can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that dataset-independent notions of effective capacity are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.
Prediction of Extubation readiness in extremely preterm infants by the automated analysis of cardiorespiratory behavior: study protocol
Wissam Shalish
Lara J. Kanbar
Smita Rao
Carlos A. Robles-Rubio
Lajos Kovacs
Sanjay Chawla
Martin Keszler
Karen Brown
Robert E. Kearney
Guilherme M. Sant’Anna
Background: Extremely preterm infants (≤ 28 weeks gestation) commonly require endotracheal intubation and mechanical ventilation (MV) to maintain adequate oxygenation and gas exchange. Given that MV is independently associated with important adverse outcomes, efforts should be made to limit its duration. However, current methods for determining extubation readiness are inaccurate and a significant number of infants fail extubation and require reintubation, an intervention that may be associated with increased morbidities. A variety of objective measures have been proposed to better define the optimal time for extubation, but none have proven clinically useful. In a pilot study, investigators from this group have shown promising results from sophisticated, automated analyses of cardiorespiratory signals as a predictor of extubation readiness. The aim of this study is to develop an automated predictor of extubation readiness using a combination of clinical tools along with novel and automated measures of cardiorespiratory behavior, to assist clinicians in determining when extremely preterm infants are ready for extubation.
Methods: In this prospective, multicenter observational study, cardiorespiratory signals will be recorded from 250 eligible extremely preterm infants with birth weights ≤ 1250 g immediately prior to their first planned extubation. Automated signal analysis algorithms will compute a variety of metrics for each infant, and machine learning methods will then be used to find the optimal combination of these metrics together with clinical variables that provide the best overall prediction of extubation readiness. Using these results, investigators will develop an Automated system for Prediction of EXtubation (APEX) readiness that will integrate the software for data acquisition, signal analysis, and outcome prediction into a single application suitable for use by medical personnel in the neonatal intensive care unit. The performance of APEX will later be prospectively validated in 50 additional infants.
Discussion: The results of this research will provide the quantitative evidence needed to assist clinicians in determining when to extubate a preterm infant with the highest probability of success, and could produce significant improvements in extubation outcomes in this population.
Trial registration: Clinicaltrials.gov identifier: NCT01909947. Registered on July 17, 2013. Trial sponsor: Canadian Institutes of Health Research (CIHR).
Unimodal Probability Distributions for Deep Ordinal Classification
Christopher Pal
Probability distributions produced by the cross-entropy loss for ordinal classification problems can possess undesired properties. We propose a straightforward technique to constrain discrete ordinal probability distributions to be unimodal via a combination of the Poisson probability mass function and the softmax nonlinearity. We evaluate this approach on two large ordinal image datasets and obtain promising results.
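Concretely, the Poisson-softmax construction can be sketched as follows: take the Poisson log-masses log p(k; λ) = k·log λ − λ − log(k!) over the ordinal classes and pass them through a softmax, which renormalizes the (unimodal) Poisson shape over the truncated support. This is a minimal numerical illustration, not the paper's implementation; in a trained model the rate λ would be produced by the network, whereas here it is a fixed value.

```python
import math
import numpy as np

def unimodal_ordinal_probs(lam, n_classes):
    """Softmax over Poisson log-masses log p(k; lam) = k*log(lam) - lam - log(k!),
    yielding a distribution over ordinal classes 0..n_classes-1 that is unimodal
    by construction (it is the Poisson pmf renormalized on a finite support)."""
    ks = np.arange(n_classes)
    log_mass = ks * math.log(lam) - lam - np.array([math.lgamma(k + 1) for k in ks])
    e = np.exp(log_mass - log_mass.max())  # shift for numerical stability
    return e / e.sum()

# Hypothetical rate; a network head would normally output lam (e.g. via softplus).
p = unimodal_ordinal_probs(lam=2.5, n_classes=8)
```

Because the Poisson pmf rises to its mode at floor(λ) and falls thereafter, the resulting class distribution cannot have the multi-peaked shapes an unconstrained softmax can produce, which is the property the abstract targets.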
A Semi-Markov Chain Approach to Modeling Respiratory Patterns Prior to Extubation in Preterm Infants
Charles C. Onu
Lara J. Kanbar
Wissam Shalish
Karen A. Brown
Guilherme M. Sant'Anna
Robert E. Kearney
After birth, extremely preterm infants often require specialized respiratory management in the form of invasive mechanical ventilation (IMV). Protracted IMV is associated with detrimental outcomes and morbidities. Premature extubation, on the other hand, would necessitate reintubation, which is risky, technically challenging, and could further lead to lung injury or disease. We present a Markov-model-based approach to modeling the respiratory patterns of infants who succeeded extubation and those who required reintubation. We compare the use of traditional Markov chains to semi-Markov models, which emphasize cross-pattern transitions and timing information, and to multi-chain Markov models, which can concisely represent non-stationarity in respiratory behavior over time. The models we developed expose specific, unique similarities as well as vital differences between the two populations.
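As a reminder of the simplest of these models, a first-order Markov chain over respiratory patterns is just a matrix of pattern-to-pattern transition probabilities estimated from counts of consecutive pairs. The pattern labels below are hypothetical placeholders, not the study's actual annotation scheme, and this sketch omits what distinguishes the semi-Markov variant (recording how long each pattern persists before a transition).

```python
import numpy as np

# Hypothetical pattern labels for illustration only; the study's actual
# respiratory pattern taxonomy may differ.
PATTERNS = ["pause", "normal", "asynchronous"]
IDX = {p: i for i, p in enumerate(PATTERNS)}

def transition_matrix(sequence):
    """Estimate first-order Markov transition probabilities from a labeled
    pattern sequence by counting consecutive-pair frequencies and
    row-normalizing the count matrix."""
    counts = np.zeros((len(PATTERNS), len(PATTERNS)))
    for a, b in zip(sequence, sequence[1:]):
        counts[IDX[a], IDX[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # leave rows of never-visited states at zero
    return counts / row_sums

# Toy labeled recording; row i of P gives P(next pattern | current pattern i).
seq = ["normal", "normal", "pause", "normal", "asynchronous", "normal"]
P = transition_matrix(seq)
```

Fitting one such matrix per population (successful vs. failed extubation) and comparing them is the kind of contrast the abstract describes; the multi-chain variant would instead fit several matrices over successive time windows.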
Multiscale sequence modeling with a learned dictionary
We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multi-scale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding (BPE) compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modeling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
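For background on the dictionary-learning step: standard BPE greedily merges the most frequent adjacent symbol pair, and each merge adds one multi-symbol token to the dictionary. The sketch below shows that standard merge loop, not the paper's exact variation, on a tiny hypothetical corpus.

```python
from collections import Counter

def learn_bpe_merges(corpus_words, num_merges):
    """Learn multi-symbol tokens by repeatedly merging the most frequent
    adjacent symbol pair (standard byte-pair encoding)."""
    # Represent each word as a tuple of symbols, weighted by its frequency.
    vocab = Counter(tuple(w) for w in corpus_words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the current segmentation.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Re-segment every word with the new merged token.
        merged = best[0] + best[1]
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

merges = learn_bpe_merges(["low", "lower", "lowest", "low"], num_merges=2)
```

The learned merges define the token dictionary; in the model described above, predictions are then made over these multi-symbol tokens rather than one character at a time.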
Detecting Large Concept Extensions for Conceptual Analysis
L. Chartrand
Jackie CK Cheung
Mohamed Bouguessa
Variance Regularizing Adversarial Learning
R Devon Hjelm
We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results from various well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per generator update. As an alternative, we propose an additional method to train GANs by explicitly modeling the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. To do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients.
APEX_SCOPE: A graphical user interface for visualization of multi-modal data in inter-disciplinary studies
Lara J. Kanbar
Wissam Shalish
Karen A. Brown
Guilherme M. Sant'Anna
Robert E. Kearney