SST'19 - Software and Systems Traceability
Jan-Philipp Steghöfer
Nan Niu
Anas Mahmoud
Traceability is the ability to relate different artifacts created during the development and operation of a system to each other. It enables program comprehension and change impact analysis, and facilitates the cooperation of engineers from different disciplines. The 10th International Workshop on Software and Systems Traceability (formerly the International Workshop on Traceability in Emerging Forms of Software Engineering, TEFSE) explored the role and impact of traceability in modern software and systems development. The event brought together researchers and practitioners to examine the challenges of recovering, maintaining, and utilizing traceability for the myriad forms of software and systems engineering artifacts. SST'19 was a highly interactive working event focused on discussing the main problems related to software traceability, particularly in the context of the opportunities and challenges posed by recent progress in Artificial Intelligence techniques, and on proposing possible solutions to these problems.
Fractal impedance for passive controllers: a framework for interaction robotics
Keyhan Kouhkiloui Babarahmati
Carlo Tiseo
Joshua Smith
Hsiu‐chin Lin
M. S. Erden
Michael Nalin Mistry
Defining ‘actionable’ high-cost health care use: results using the Canadian Institute for Health Information population grouping methodology
Maureen Anderson
Crawford W. Revie
Henrik Stryhn
Cordell Neudorf
Yvonne Rosehart
Wenbin Li
Meriç Osman
Laura C. Rosella
Walter P. Wodchis
Preventing Posterior Collapse in Sequence VAEs with Pooling
Teng Long
Yanshuai Cao
Variational Autoencoders (VAEs) hold great potential for modelling text, as they could in theory separate high-level semantic and syntactic properties from local regularities of natural language. Practically, however, VAEs with autoregressive decoders often suffer from posterior collapse, a phenomenon where the model learns to ignore the latent variables, causing the sequence VAE to degenerate into a language model. Previous works attempt to solve this problem with complex architectural changes or costly optimization schemes. In this paper, we argue that posterior collapse is caused in part by the encoder network failing to capture the input variabilities. We verify this hypothesis empirically and propose a straightforward fix using pooling. This simple technique effectively prevents posterior collapse, allowing the model to achieve significantly better data log-likelihood than standard sequence VAEs. Compared to the previous state of the art on preventing posterior collapse, we achieve comparable performance while being significantly faster.
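As a rough illustration of the pooling fix (a minimal PyTorch sketch with assumed layer sizes and mean pooling as the aggregation; the paper's exact encoder configuration may differ), the posterior parameters are computed from an aggregate of all encoder hidden states rather than from the final state alone:

    import torch
    import torch.nn as nn

    class PooledEncoder(nn.Module):
        """Sequence VAE encoder that pools over all hidden states
        instead of relying on the final state alone (sizes illustrative)."""
        def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512, latent_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.to_mu = nn.Linear(hidden_dim, latent_dim)
            self.to_logvar = nn.Linear(hidden_dim, latent_dim)

        def forward(self, tokens):
            states, _ = self.rnn(self.embed(tokens))  # (batch, seq, hidden)
            pooled = states.mean(dim=1)               # mean-pool over time steps
            mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return z, mu, logvar

Because the pooled vector depends on every token, the approximate posterior varies with the input, which makes it harder for an autoregressive decoder to ignore the latent variable.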
Adversarial target-invariant representation learning for domain generalization
Isabela Albuquerque
Joao Monteiro
Tiago Falk
In many applications of machine learning, the training and test set data come from different distributions, or domains. A number of domain generalization strategies have been introduced with the goal of achieving good performance on out-of-distribution data. In this paper, we propose an adversarial approach to the problem: a process that enforces pair-wise domain invariance while training a feature extractor over a diverse set of domains. We show that this process ensures invariance to any distribution that can be expressed as a mixture of the training domains. Following this insight, we then introduce an adversarial approach in which pair-wise divergences are estimated and minimized. Experiments on two domain generalization benchmarks for object recognition (i.e., PACS and VLCS) show that the proposed method yields higher average accuracy on the target domains in comparison to previously introduced adversarial strategies, as well as recently proposed methods based on learning invariant representations.
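A rough sketch of the pairwise adversarial objective (hypothetical PyTorch code; the use of one binary discriminator per domain pair and the loss form are assumptions, not the paper's exact setup):

    import itertools
    import torch
    import torch.nn.functional as F

    def pairwise_adversarial_loss(features_by_domain, discriminators):
        """Adversarial estimate of pairwise domain divergences.
        features_by_domain: list of (batch, feat_dim) tensors, one per domain.
        discriminators: dict mapping a pair (i, j) to a binary classifier."""
        disc_loss = 0.0
        for (i, fi), (j, fj) in itertools.combinations(enumerate(features_by_domain), 2):
            logits_i = discriminators[(i, j)](fi)  # should predict domain i -> 1
            logits_j = discriminators[(i, j)](fj)  # should predict domain j -> 0
            disc_loss = disc_loss \
                + F.binary_cross_entropy_with_logits(logits_i, torch.ones_like(logits_i)) \
                + F.binary_cross_entropy_with_logits(logits_j, torch.zeros_like(logits_j))
        # The feature extractor is updated to *maximize* this loss (a minimax
        # game), which drives the estimated pairwise divergences down.
        return disc_loss

In practice the discriminators and the feature extractor are updated in alternating steps (or via a gradient-reversal layer), alongside the standard task loss on the labels.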
Generalizing to unseen domains via distribution matching
Isabela Albuquerque
Joao Monteiro
Mohammad Javad Darvishi Bayazi
Tiago Falk
Supervised learning results typically rely on assumptions of i.i.d. data. Unfortunately, those assumptions are commonly violated in practice. In this work, we tackle this problem by focusing on domain generalization: a formalization where the data generating process at test time may yield samples from never-before-seen domains (distributions). Our work relies on a simple lemma: by minimizing a notion of discrepancy between all pairs from a set of given domains, we also minimize the discrepancy between any pair of mixtures of domains. Using this result, we derive a generalization bound for our setting. We then show that low risk over unseen domains can be achieved by representing the data in a space where (i) the training distributions are indistinguishable, and (ii) relevant information for the task at hand is preserved. Minimizing the terms in our bound yields an adversarial formulation which estimates and minimizes pairwise discrepancies. We validate our proposed strategy on standard domain generalization benchmarks, outperforming a number of recently introduced methods. Notably, we tackle a real-world application where the underlying data corresponds to multi-channel electroencephalography time series from different subjects, each considered as a distinct domain.
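The intuition behind the lemma can be written as follows (our notation; the paper's precise statement and conditions may differ). For training domains D_1, ..., D_K, a discrepancy d that is jointly convex in its arguments, and mixture weights alpha, beta on the simplex:

    d\Big(\textstyle\sum_i \alpha_i D_i,\ \sum_j \beta_j D_j\Big)
      \le \sum_{i,j} \alpha_i \beta_j\, d(D_i, D_j)
      \le \max_{i,j}\, d(D_i, D_j)

so controlling every pairwise term d(D_i, D_j) between training domains also controls the discrepancy between any two mixtures of those domains.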
Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction
Alaaeldin El-Nouby
Shikhar Sharma
Hannes Schulz
Layla El Asri
Graham W. Taylor
Conditional text-to-image generation is an active area of research, with many possible applications. Existing research has primarily focused on generating a single image from available conditioning information in one step. One practical extension beyond one-step generation is a system that generates an image iteratively, conditioned on ongoing linguistic input or feedback. This is significantly more challenging than one-step generation tasks, as such a system must understand the contents of its generated images with respect to the feedback history, the current feedback, as well as the interactions among concepts present in the feedback history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, and apply simple transformations to existing objects. We believe our approach is an important step toward interactive generation. Code and data are available at: https://www.microsoft.com/en-us/research/project/generative-neural-visual-artist-geneva/.
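A skeletal illustration of the recurrent conditioning loop (hypothetical PyTorch code; every module choice and size here is an assumption, not the paper's architecture): a recurrent state folds in each new instruction and, together with features of the current canvas, conditions the next image update.

    import torch
    import torch.nn as nn

    class IterativeDrawer(nn.Module):
        """Iterative, instruction-conditioned generation skeleton (illustrative)."""
        def __init__(self, text_dim=256, img_feat_dim=128, state_dim=512):
            super().__init__()
            self.history = nn.GRUCell(text_dim, state_dim)   # instruction history
            self.img_enc = nn.Sequential(                    # encode current canvas
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, img_feat_dim))
            self.generator = nn.Sequential(                  # emit a 3x64x64 canvas
                nn.Linear(state_dim + img_feat_dim, 3 * 64 * 64), nn.Tanh())

        def forward(self, instruction_embeddings, canvas):
            state = canvas.new_zeros(canvas.size(0), self.history.hidden_size)
            for emb in instruction_embeddings:        # one embedding per turn
                state = self.history(emb, state)      # fold in the new feedback
                cond = torch.cat([state, self.img_enc(canvas)], dim=-1)
                canvas = self.generator(cond).view(-1, 3, 64, 64)
            return canvas

The key property is that each step sees both the accumulated instruction history (the recurrent state) and the current generated output, matching the iterative setting described above.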
Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text
Ian Porada
Kaheer Suleman
CoQA: A Conversational Question Answering Challenge
Danqi Chen
Christopher D Manning
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
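To see why conversational questions are harder than standalone ones, consider a toy example in the spirit of the dataset (the content is invented and the field names are illustrative, not CoQA's actual JSON schema):

    # A second-turn question only makes sense given the first turn's answer:
    toy_dialogue = {
        "passage": "Jessica went to the market. She bought some apples.",
        "turns": [
            {"question": "Who went to the market?",
             "answer": "Jessica",
             "evidence": "Jessica went to the market."},
            {"question": "What did she buy?",   # "she" = Jessica (coreference)
             "answer": "Some apples",
             "evidence": "She bought some apples."},
        ],
    }

Resolving "she" in the second turn requires the conversation history, which is exactly the kind of phenomenon (coreference and pragmatic reasoning) the abstract highlights.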
Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses
Matt Grenander
Yue Dong
Annie Priyadarshini Louis
Sentence position is a strong feature for news summarization, since the lead often (but not always) summarizes the key points of the article. In this paper, we show that recent neural systems excessively exploit this trend, which, although powerful for many inputs, is detrimental when summarizing documents where important content should be extracted from later parts of the article. We propose two techniques to make systems sensitive to the importance of content in different parts of the article. The first technique employs ‘unbiased’ data, i.e., randomly shuffled sentences of the source document, to pretrain the model. The second technique uses an auxiliary ROUGE-based loss that encourages the model to distribute importance scores throughout a document by mimicking sentence-level ROUGE scores on the training data. We show that these techniques significantly improve the performance of a competitive reinforcement learning based extractive system, with the auxiliary loss being more powerful than pretraining.
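The auxiliary loss can be sketched as follows (hypothetical PyTorch code; normalizing the ROUGE scores into a distribution and using a KL objective are assumptions about one reasonable instantiation):

    import torch.nn.functional as F

    def rouge_auxiliary_loss(predicted_scores, sentence_rouge):
        """Push per-sentence importance scores toward sentence-level ROUGE.
        predicted_scores: (batch, n_sentences) unnormalized model scores.
        sentence_rouge:   (batch, n_sentences) ROUGE of each source sentence
                          against the reference summary (precomputed)."""
        target = F.softmax(sentence_rouge, dim=-1)          # ROUGE as a distribution
        log_pred = F.log_softmax(predicted_scores, dim=-1)  # model's distribution
        return F.kl_div(log_pred, target, reduction="batchmean")

Because sentence-level ROUGE is high wherever summary-worthy content appears, mimicking it spreads importance across the whole document instead of concentrating it on the lead.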
Efficient Planning under Partial Observability with Unnormalized Q Functions and Spectral Learning
Learning and planning in partially-observable domains is one of the most difficult problems in reinforcement learning. Traditional methods consider these two problems as independent, resulting in a classical two-stage paradigm: first learn the environment dynamics and then plan accordingly. This approach, however, disconnects the two problems and can consequently lead to algorithms that are sample inefficient and time consuming. In this paper, we propose a novel algorithm that combines learning and planning together. Our algorithm is closely related to the spectral learning algorithm for predictive state representations and offers appealing theoretical guarantees and time complexity. We empirically show on two domains that our approach is more sample and time efficient compared to classical methods.
How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG
Paul Trichelair
Ali Emami
Adam Trischler
Kaheer Suleman
Recent studies have significantly improved the state-of-the-art on common-sense reasoning (CSR) benchmarks like the Winograd Schema Challenge (WSC) and SWAG. The question we ask in this paper is whether improved performance on these benchmarks represents genuine progress towards common-sense-enabled systems. We make case studies of both benchmarks and design protocols that clarify and qualify the results of previous work by analyzing threats to the validity of previous experimental designs. Our protocols account for several properties prevalent in common-sense benchmarks including size limitations, structural regularities, and variable instance difficulty.