Diagnosing as autistic people increasingly distant from prototypes leads neither to clinical benefit nor to the advancement of knowledge
Laurent Mottron
Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning
Nan Rosemary Ke
Aniket Rajiv Didolkar
Sarthak Mittal
Anirudh Goyal
Stefan Bauer
Danilo Jimenez Rezende
Michael Curtis Mozer
Inducing causal relationships from observations is a classic problem in machine learning. Most work in causality starts from the premise that the causal variables themselves are observed. However, for AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level variables, particularly those which are causal or are affected by causal variables. A central goal for AI and causality is thus the joint discovery of abstract representations and causal structure. However, we note that existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs which are impossible to manipulate parametrically (e.g., number of nodes, sparsity, causal chain length, etc.). In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them. In order to systematically probe the ability of methods to identify these variables and structures, we design a suite of benchmarking RL environments. We evaluate various representation learning algorithms from the literature and find that explicitly incorporating structure and modularity in models can help causal induction in model-based reinforcement learning.
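The environments themselves are only described at a high level here; as a purely illustrative sketch (not the paper's benchmark code), the snippet below shows how a causal graph can be sampled with parametric control over the number of nodes and the sparsity, the kind of knob the proposed suite is designed to expose.

```python
# Illustrative sketch only, not the paper's benchmark suite: sample a random
# causal DAG whose size and sparsity can be varied parametrically.
import numpy as np

def sample_causal_dag(num_nodes: int, sparsity: float, seed: int = 0) -> np.ndarray:
    """Return the adjacency matrix of a random DAG.

    Edges only go from lower- to higher-indexed nodes, which guarantees
    acyclicity; `sparsity` is the probability that a candidate edge is dropped.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random((num_nodes, num_nodes)) > sparsity
    return np.triu(keep, k=1).astype(int)

# Example: a 5-node graph with roughly 70% of candidate edges removed.
print(sample_causal_dag(num_nodes=5, sparsity=0.7))
```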
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
Pau Rodriguez
Massimo Caccia
Alexandre Lacoste
Lee Zamparo
Issam Hadj Laradji
David Vazquez
Explainability for machine learning models has gained considerable attention within the research community given the importance of deploying more reliable machine-learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model’s input to change its prediction, providing details about the model’s decision-making. Current methods tend to generate trivial counterfactuals about a model’s decisions, as they often suggest exaggerating or removing the presence of the attribute being classified. For the machine learning practitioner, these types of counterfactuals offer little value, since they provide no new information about undesired model or data biases. In this work, we identify the problem of trivial counterfactual generation and we propose DiVE to alleviate it. DiVE learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss to uncover multiple valuable explanations about the model’s prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods. Code is available at https://github.com/ElementAI/beyond-trivial-explanations.
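As a rough illustration of a diversity-enforcing loss (hypothetical names and not the DiVE implementation), the sketch below penalizes pairwise similarity between several candidate latent perturbations so that they do not collapse onto a single explanation.

```python
# Hypothetical sketch of a diversity-enforcing penalty on latent perturbations;
# this is the general idea only, not the DiVE code.
import torch
import torch.nn.functional as F

def diversity_loss(perturbations: torch.Tensor) -> torch.Tensor:
    """perturbations: (k, d) tensor, one latent perturbation per explanation."""
    z = F.normalize(perturbations, dim=1)
    sim = z @ z.t()                          # pairwise cosine similarities
    off_diag = sim - torch.eye(z.size(0))    # ignore self-similarity
    return off_diag.pow(2).sum() / (z.size(0) * (z.size(0) - 1))

# Example: 4 candidate perturbations in a 16-dimensional latent space.
print(diversity_loss(torch.randn(4, 16)))
```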
DoMoBOT: An AI-Empowered Bot for Automated and Interactive Domain Modelling
Rijul Saini
Gunter Mussbacher
Jörg Kienzle
Domain modelling transforms informal requirements written in natural language in the form of problem descriptions into concise and analyzable domain models. As the manual construction of these domain models is often time-consuming, error-prone, and labor-intensive, several approaches already exist to automate domain modelling. However, current approaches suffer from the low accuracy of the extracted domain models and a lack of support for system-modeller interactions. To better assist modellers, we introduce DoMoBOT, a web-based Domain Modelling BOT. Our proposed bot combines artificial intelligence techniques such as natural language processing and machine learning to extract domain models with higher accuracy. More importantly, our bot incorporates a set of features to bring synergy between automated model extraction and bot-modeller interactions. During these interactions, the bot presents multiple possible solutions to a modeller for modelling scenarios present in a given problem description. The bot further enables modellers to switch to a particular solution and proactively updates the other parts of the domain model. In this tool demo paper, we demonstrate how the implementation and architecture of DoMoBOT support the paradigm of automated and interactive domain modelling for assisting modellers.
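As a deliberately naive illustration of the extraction step (DoMoBOT itself relies on trained NLP and ML models; the description and names below are made up), recurring capitalized nouns in a problem description can serve as first candidates for domain classes:

```python
# Naive, purely illustrative heuristic, not DoMoBOT's extraction pipeline:
# treat recurring capitalized nouns in a problem description as candidate classes.
import re
from collections import Counter

description = (
    "A Customer places an Order. Each Order contains one or more Items, "
    "and every Item belongs to a Product category."
)

stopwords = {"A", "An", "The", "Each", "Every"}
words = re.findall(r"\b[A-Z][a-z]+\b", description)
candidates = Counter(w for w in words if w not in stopwords)
print(candidates.most_common())  # e.g. [('Order', 2), ('Customer', 1), ...]
```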
Generative Compositional Augmentations for Scene Graph Prediction
Boris Knyazev
Harm de Vries
Cătălina Cangea
Graham W. Taylor
Inferring objects and their relationships from an image in the form of a scene graph is useful in many applications at the intersection of vision and language. We consider a challenging problem of compositional generalization that emerges in this task due to a long tail data distribution. Current scene graph generation models are trained on a tiny fraction of the distribution corresponding to the most frequent compositions, e.g. . However, test images might contain zero- and few-shot compositions of objects and relationships, e.g. . Despite each of the object categories and the predicate (e.g. ‘on’) being frequent in the training data, the models often fail to properly understand such unseen or rare compositions. To improve generalization, it is natural to attempt increasing the diversity of the training distribution. However, in the graph domain this is non-trivial. To that end, we propose a method to synthesize rare yet plausible scene graphs by perturbing real ones. We then propose and empirically study a model based on conditional generative adversarial networks (GANs) that allows us to generate visual features of perturbed scene graphs and learn from them in a joint fashion. When evaluated on the Visual Genome dataset, our approach yields marginal, but consistent improvements in zero- and few-shot metrics. We analyze the limitations of our approach indicating promising directions for future research.
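To make the perturbation idea concrete (the triplets and vocabulary below are hypothetical illustrations, not the paper's data or code), a real scene graph can be edited by swapping one object category, producing a rare but structurally plausible composition; the paper then generates visual features for such graphs with a conditional GAN.

```python
# Hypothetical sketch of scene-graph perturbation; example triplets are made up.
import random

scene_graph = [("cup", "on", "table"), ("person", "holding", "cup")]
object_vocab = ["cup", "table", "person", "elephant", "surfboard"]

def perturb(graph, vocab, seed=0):
    rng = random.Random(seed)
    subj, pred, obj = rng.choice(graph)                    # pick one triplet to edit
    new_obj = rng.choice([o for o in vocab if o != obj])   # swap in a different object
    return [(s, p, new_obj) if (s, p, o) == (subj, pred, obj) else (s, p, o)
            for s, p, o in graph]

print(perturb(scene_graph, object_vocab))  # e.g. a rare composition of the same predicate
```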
Impact of Aliasing on Generalization in Deep Convolutional Networks
Cristina Vasconcelos
Vincent Dumoulin
Rob Romijnders
We investigate the impact of aliasing on generalization in Deep Convolutional Networks and show that data augmentation schemes alone are unable to prevent it due to structural limitations in widely used architectures. Drawing insights from frequency analysis theory, we take a closer look at ResNet and EfficientNet architectures and review the trade-off between aliasing and information loss in each of their major components. We show how to mitigate aliasing by inserting non-trainable low-pass filters at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in generalization on i.i.d. and even more on out-of-distribution conditions, such as image classification under natural corruptions on ImageNet-C [11] and few-shot learning on Meta-Dataset [26]. State-of-the-art results are achieved on both datasets without introducing additional trainable parameters and using the default hyper-parameters of open source codebases.
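A minimal sketch of the core fix, under standard assumptions and not taken from the authors' code: a fixed, non-trainable blur (low-pass) filter applied depthwise before strided downsampling, so that high-frequency content is attenuated instead of aliased.

```python
# Sketch only: fixed 3x3 binomial low-pass filter before stride-2 downsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurDownsample(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = (kernel / kernel.sum()).reshape(1, 1, 3, 3).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)  # non-trainable: a buffer, not a parameter
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        # depthwise convolution: the same fixed blur kernel applied per channel
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=1, groups=self.channels)

x = torch.randn(1, 64, 56, 56)
print(BlurDownsample(64)(x).shape)  # torch.Size([1, 64, 28, 28])
```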
GPU acceleration of finite state machine input execution: Improving scale and performance
Vanya Yaneva
Ajitha Rajan
Model‐based development is a popular development approach in which software is implemented and verified based on a model of the required system. Finite state machines (FSMs) are widely used as models for systems in several domains. Validating that a model accurately represents the required behaviour involves the generation and execution of a large number of input sequences, which is often an expensive and time‐consuming process. In this paper, we speed up the execution of input sequences for FSM validation, by leveraging the high degree of parallelism of modern graphics processing units (GPUs) for the automatic execution of FSM input sequences in parallel on the GPU threads. We expand our existing work by providing techniques that improve the performance and scalability of this approach. We conduct extensive empirical evaluation using 15 large FSMs from the networking domain and measure GPU speed‐up over a 16‐core CPU, taking into account total GPU time, which includes both data transfer and kernel execution time. We found that GPUs execute FSM input sequences up to 9.28× faster than a 16‐core CPU, with an average speed‐up of 4.53× across all subjects. Our optimizations achieve an average improvement over existing work of 58.95% for speed‐up and scalability to large FSMs with over 2K states and 500K transitions. We also found that techniques aimed at reducing the number of required input sequences for large FSMs with high density were ineffective when applied to all‐transition pair coverage, thus emphasizing the need for approaches like ours that speed up input execution.
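The workload being parallelized is conceptually simple; the sketch below is a CPU-side illustration (not the paper's GPU implementation), advancing many input sequences through the same FSM transition table in lockstep, one step per input symbol, which is what the paper maps onto GPU threads.

```python
# Illustrative CPU sketch of executing many FSM input sequences "in parallel";
# the paper assigns one GPU thread per sequence instead.
import numpy as np

num_states, num_symbols = 4, 2
rng = np.random.default_rng(0)
transition = rng.integers(0, num_states, size=(num_states, num_symbols))

sequences = rng.integers(0, num_symbols, size=(1000, 20))  # 1000 sequences of length 20
states = np.zeros(1000, dtype=int)                         # all sequences start in state 0
for step in range(sequences.shape[1]):
    states = transition[states, sequences[:, step]]        # one lockstep transition

print(states[:10])  # final state reached by each of the first ten sequences
```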
Action-Based Representation Learning for Autonomous Driving
Yi Xiao
Felipe Codevilla
Antonio M. López
Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
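A hypothetical sketch of the general recipe (the layer sizes, heads, and data below are made up, and this is not the authors' code): pre-train an image encoder by predicting logged driving actions, then train a small affordance head on top of the frozen encoder with few labels.

```python
# Hypothetical two-stage sketch: action-prediction pre-training, then an
# affordance head on top of the (frozen) pre-trained encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)
action_head = nn.Linear(128, 2)       # e.g. steering and throttle
affordance_head = nn.Linear(128, 3)   # e.g. three made-up affordance signals

images, actions = torch.randn(8, 3, 128, 128), torch.randn(8, 2)

# Stage 1: action prediction on abundant, automatically logged driving data.
loss = F.mse_loss(action_head(encoder(images)), actions)
loss.backward()

# Stage 2: a small weakly annotated set trains only the affordance head.
with torch.no_grad():
    features = encoder(images)
affordances = affordance_head(features)
```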
Minimum detectable spinal cord atrophy with automatic segmentation: Investigations using an open-access dataset of healthy participants
Paul Bautin
Capacity Planning in Stable Matching
Federico Bobbio
Andrea Lodi
Ignacio Rios
Alfredo Torrico
We introduce the problem of jointly increasing school capacities and finding a student-optimal assignment in the expanded market. Due to the impossibility of efficiently solving the problem with classical methods, we generalize existing mathematical programming formulations of stability constraints to our setting, most of which result in integer quadratically-constrained programs. In addition, we propose a novel mixed-integer linear programming formulation that is exponentially large in the problem size. We show that its stability constraints can be separated by exploiting the objective function, leading to an effective cutting-plane algorithm. We conclude the theoretical analysis of the problem by discussing some mechanism properties. On the computational side, we evaluate the performance of our approaches in a detailed study, and we find that our cutting-plane method outperforms our generalization of existing mixed-integer approaches. We also propose two heuristics that are effective for large instances of the problem. Finally, we use the Chilean school choice system data to demonstrate the impact of capacity planning under stability conditions. Our results show that each additional seat can benefit multiple students and that we can effectively target the assignment of previously unassigned students or improve the assignment of several students through improvement chains. These insights empower the decision-maker in tuning the matching algorithm to provide a fair application-oriented solution.
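For fixed capacities, the student-optimal stable assignment referenced above can be computed with student-proposing deferred acceptance; the self-contained sketch below (not the paper's MILP or cutting-plane method, and with made-up preference data) makes it easy to see how adding a single seat changes the assignment.

```python
# Sketch only: student-proposing deferred acceptance for fixed capacities.
def deferred_acceptance(student_prefs, school_prefs, capacities):
    # student_prefs: {student: [schools in preference order]}
    # school_prefs:  {school: [students in priority order]}
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    assigned = {s: [] for s in capacities}
    next_choice = {st: 0 for st in student_prefs}
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                   # list exhausted: stays unassigned
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        assigned[school].append(st)
        assigned[school].sort(key=lambda x: rank[school][x])
        if len(assigned[school]) > capacities[school]:
            free.append(assigned[school].pop())        # reject lowest-priority student
    return assigned

students = {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s1"]}
schools = {"s1": ["a", "b", "c"], "s2": ["a", "b", "c"]}
print(deferred_acceptance(students, schools, {"s1": 1, "s2": 1}))
print(deferred_acceptance(students, schools, {"s1": 2, "s2": 1}))  # one extra seat at s1
```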
FloW: A Dataset and Benchmark for Floating Waste Detection in Inland Waters
Yuwei Cheng
Jiannan Zhu
Mengxin Jiang
Jie Fu
Changsong Pang
Peidong Wang
Kris Sankaran
Olawale Moses Onabola
Yimin Liu
Dianbo Liu
Marine debris severely threatens marine life and causes sustained pollution to the whole ecosystem. To prevent waste from getting into the ocean, it is helpful to clean up floating waste in inland waters using autonomous cleaning devices such as unmanned surface vehicles. The cleaning efficiency relies on a highly accurate and robust object detection system. However, the small size of the targets, the strong light reflection over the water surface, and the reflection of other objects on the bank side all pose challenges to a vision-based object detection system. To promote the practical application of autonomous floating waste cleaning, we present FloW†, the first dataset for floating waste detection in inland water areas. The dataset consists of an image sub-dataset FloW-Img and a multimodal sub-dataset FloW-RI which contains synchronized millimeter wave radar data and images. Accurate annotations for images and radar data are provided, supporting floating waste detection strategies based on images, radar data, and the fusion of the two sensors. We perform several baseline experiments on our dataset, including vision-based and radar-based detection methods. The results show that the detection accuracy is relatively low and that floating waste detection remains a challenging task.