Impact of Aliasing on Generalization in Deep Convolutional Networks
Cristina Vasconcelos
Vincent Dumoulin
Rob Romijnders
We investigate the impact of aliasing on generalization in Deep Convolutional Networks and show that data augmentation schemes alone are unable to prevent it due to structural limitations in widely used architectures. Drawing insights from frequency analysis theory, we take a closer look at ResNet and EfficientNet architectures and review the trade-off between aliasing and information loss in each of their major components. We show how to mitigate aliasing by inserting non-trainable low-pass filters at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in generalization under i.i.d. and, even more so, under out-of-distribution conditions, such as image classification under natural corruptions on ImageNet-C [11] and few-shot learning on Meta-Dataset [26]. State-of-the-art results are achieved on both datasets without introducing additional trainable parameters and using the default hyper-parameters of open-source codebases.
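The core idea, applying a fixed, non-trainable low-pass (blur) filter before each downsampling step, can be sketched in a few lines of NumPy. This is a minimal illustration of the general anti-aliasing principle, not the authors' exact architecture; the 3x3 binomial kernel and stride-2 subsampling are illustrative choices.

```python
import numpy as np

def blur_downsample(x, stride=2):
    """Apply a fixed (non-trainable) 3x3 binomial low-pass filter,
    then subsample with the given stride.

    x: 2D array (H, W) representing one feature-map channel.
    """
    # Separable binomial kernel [1, 2, 1] / 4 -> 3x3 outer product.
    k1d = np.array([1.0, 2.0, 1.0]) / 4.0
    kernel = np.outer(k1d, k1d)  # weights sum to 1

    # 'same' convolution via zero padding.
    h, w = x.shape
    padded = np.pad(x, 1)
    out = np.zeros(x.shape)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)

    # Subsample only after filtering, which attenuates frequencies
    # above the new Nyquist limit before they can alias.
    return out[::stride, ::stride]

# A high-frequency checkerboard aliases badly under naive striding:
x = np.indices((8, 8)).sum(axis=0) % 2  # alternating 0/1 pattern
naive = x[::2, ::2]          # samples only one phase -> constant image
smooth = blur_downsample(x)  # interior settles near the 0.5 mean instead
```

The checkerboard example makes the failure mode concrete: naive striding keeps only one phase of the pattern and discards the signal entirely, while filtering first preserves its average intensity.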
GPU acceleration of finite state machine input execution: Improving scale and performance
Vanya Yaneva
Ajitha Rajan
Model‐based development is a popular development approach in which software is implemented and verified based on a model of the required system. Finite state machines (FSMs) are widely used as models for systems in several domains. Validating that a model accurately represents the required behaviour involves the generation and execution of a large number of input sequences, which is often an expensive and time‐consuming process. In this paper, we speed up the execution of input sequences for FSM validation, by leveraging the high degree of parallelism of modern graphics processing units (GPUs) for the automatic execution of FSM input sequences in parallel on the GPU threads. We expand our existing work by providing techniques that improve the performance and scalability of this approach. We conduct extensive empirical evaluation using 15 large FSMs from the networking domain and measure GPU speed‐up over a 16‐core CPU, taking into account total GPU time, which includes both data transfer and kernel execution time. We found that GPUs execute FSM input sequences up to 9.28× faster than a 16‐core CPU, with an average speed‐up of 4.53× across all subjects. Our optimizations achieve an average improvement over existing work of 58.95% for speed‐up and scalability to large FSMs with over 2K states and 500K transitions. We also found that techniques aimed at reducing the number of required input sequences for large FSMs with high density were ineffective when applied to all‐transition pair coverage, thus emphasizing the need for approaches like ours that speed up input execution.
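The one-thread-per-sequence execution model can be sketched as a vectorized transition-table lookup, where every sequence advances in lockstep. This is a CPU-side analogue of the GPU kernel described above, using a hypothetical 3-state machine over a binary alphabet for illustration.

```python
import numpy as np

# Hypothetical 3-state FSM over inputs {0, 1}, encoded as a
# transition table: T[state, symbol] -> next state.
T = np.array([
    [1, 0],   # state 0: on input 0 go to 1, on input 1 stay at 0
    [2, 0],   # state 1
    [2, 1],   # state 2
])

def run_sequences(table, sequences, start_state=0):
    """Execute many equal-length input sequences in lockstep.

    Each step updates all sequences at once via a single indexed
    lookup, mirroring how one GPU thread per sequence would advance
    through the FSM. sequences: int array (num_sequences, seq_len).
    Returns the final state reached by each sequence.
    """
    states = np.full(sequences.shape[0], start_state, dtype=int)
    for t in range(sequences.shape[1]):
        states = table[states, sequences[:, t]]
    return states

seqs = np.array([[0, 0, 1],
                 [1, 1, 1],
                 [0, 1, 0]])
finals = run_sequences(T, seqs)  # one final state per sequence
```

On a GPU, the outer lockstep loop disappears: each thread runs one sequence independently, and the dominant costs become the data transfer and kernel time that the evaluation above accounts for.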
Capacity Planning in Stable Matching
Federico Bobbio
Andrea Lodi
Ignacio Rios
Alfredo Torrico
We introduce the problem of jointly increasing school capacities and finding a student-optimal assignment in the expanded market. Because the problem cannot be solved efficiently with classical methods, we generalize existing mathematical programming formulations of stability constraints to our setting, most of which result in integer quadratically-constrained programs. In addition, we propose a novel mixed-integer linear programming formulation that is exponential in the problem size. We show that its stability constraints can be separated by exploiting the objective function, leading to an effective cutting-plane algorithm. We conclude the theoretical analysis of the problem by discussing some mechanism properties. On the computational side, we evaluate the performance of our approaches in a detailed study, and we find that our cutting-plane method outperforms our generalization of existing mixed-integer approaches. We also propose two heuristics that are effective for large instances of the problem. Finally, we use data from the Chilean school choice system to demonstrate the impact of capacity planning under stability conditions. Our results show that each additional seat can benefit multiple students, and that we can effectively target the assignment of previously unassigned students or improve the assignment of several students through improvement chains. These insights empower the decision-maker to tune the matching algorithm toward a fair, application-oriented solution.
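The baseline matching problem (before any capacity expansion) is solved by student-proposing deferred acceptance. A minimal sketch with hypothetical students, schools, and priorities, which also illustrates how a single extra seat can benefit a chain of students:

```python
def student_optimal_match(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> ordered list of schools.
    school_prefs: dict school -> ordered list of students (priority).
    capacities: dict school -> number of seats.
    Returns dict student -> assigned school (None if unassigned).
    """
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in school_prefs}   # tentatively accepted
    free = list(student_prefs)

    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue  # s has exhausted their preference list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())  # reject lowest-priority student

    match = {s: None for s in student_prefs}
    for c, students in held.items():
        for s in students:
            match[s] = c
    return match

# One extra seat at the popular school A triggers an improvement chain:
prefs = {'s1': ['A', 'B'], 's2': ['A', 'B'], 's3': ['A', 'B']}
prio = {'A': ['s1', 's2', 's3'], 'B': ['s1', 's2', 's3']}
tight = student_optimal_match(prefs, prio, {'A': 1, 'B': 1})
expanded = student_optimal_match(prefs, prio, {'A': 2, 'B': 1})
```

In the tight market, s3 ends up unassigned; with one additional seat at A, s2 moves up into A and the vacated seat at B absorbs s3, so a single seat benefits two students. The joint capacity-planning problem above asks where to place such seats optimally, which is what makes it hard.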
Inter-Brain Synchronization: From Neurobehavioral Correlation to Causal Explanation
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning
Barbara Feulner
Raeed H. Chowdhury
Lee Miller
Juan A. Gallego
Claudia Clopath
THE EFFECT SIZE OF GENES ON COGNITIVE ABILITIES IS LINKED TO THEIR EXPRESSION ALONG THE MAJOR HIERARCHICAL GRADIENT IN THE HUMAN BRAIN
Sébastien Jacquemont
Guillaume Huguet
Elise Douard
Zohra Saci
Laura Almasy
David C. Glahn
Trade-off Between Accuracy and Fairness of Data-driven Building and Indoor Environment Models: A Comparative Study of Pre-processing Methods
Ying Sun
Fariborz Haghighat
Transfer functions: learning about a lagged exposure-outcome association in time-series data
Hiroshi Mamiya
Alexandra M. Schmidt
Erica E. M. Moodie
Many population exposures in time-series analysis, including food marketing, exhibit a time-lagged association with population health outcomes such as food purchasing. A common approach to measuring patterns of association over different time lags relies on a finite-lag model, which requires correct specification of the maximum duration over which the lagged association extends. However, the maximum lag is frequently unknown, due to a lack of substantive knowledge or to geographic variation in lag length. We describe a time-series analytical approach based on an infinite-lag specification under a transfer function model, which avoids the specification of an arbitrary maximum lag length. We demonstrate its application to a lagged exposure-outcome association in food environment research: display promotion of sugary beverages and lagged sales.
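One standard infinite-lag specification uses geometric decay (a Koyck-type transfer function), in which a single decay rate replaces the choice of a maximum lag: an exposure k periods ago contributes beta * lam**k to the outcome, for every k. The simulation below is a sketch under that assumption, with made-up parameters, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an on/off exposure (e.g., weekly display promotion) and an
# outcome (e.g., sales) whose response decays geometrically:
#   y_t = alpha + f_t + noise,   f_t = lam * f_{t-1} + beta * x_t
# The recursion encodes the infinite lag sum f_t = beta * sum_k lam**k x_{t-k}
# without ever specifying a maximum lag.
alpha, beta, lam = 10.0, 2.0, 0.6
n = 300
x = rng.integers(0, 2, size=n).astype(float)

f = np.zeros(n)
for t in range(1, n):
    f[t] = lam * f[t - 1] + beta * x[t]
y = alpha + f + rng.normal(0, 0.1, size=n)

# Recover (lam, beta) by least squares on the rearranged recursion
#   y_t - alpha  ~  lam * (y_{t-1} - alpha) + beta * x_t
# (treating alpha as known, for simplicity of the sketch).
A = np.column_stack([y[:-1] - alpha, x[1:]])
lam_hat, beta_hat = np.linalg.lstsq(A, y[1:] - alpha, rcond=None)[0]
```

The fitted decay rate fully characterizes the lag profile: the implied effect of an exposure after k weeks is simply beta_hat * lam_hat**k, with no maximum-lag cutoff to justify.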
Graph Neural Networks in Natural Language Processing
Lingfei Wu
Natural language processing (NLP) and understanding aim to read from unformatted text to accomplish different tasks. While word embeddings learned by deep neural networks are widely used, these representations cannot fully exploit the underlying linguistic and semantic structure of text. Graphs are a natural way to capture the connections between different text pieces, such as entities, sentences, and documents. To overcome the limits of vector space models, researchers combine deep learning models with graph-structured representations for various tasks in NLP and text mining. Such combinations help to make full use of both the structural information in text and the representation-learning ability of deep neural networks. In this chapter, we introduce the graph representations that are extensively used in NLP and show how different NLP tasks can be tackled from a graph perspective. We summarize recent research on graph-based NLP and discuss two case studies in detail, covering graph-based text clustering and matching, and multi-hop machine reading comprehension. Finally, we provide a synthesis of the important open problems in this subfield.
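As a minimal example of a graph representation of text, a word co-occurrence graph connects words that appear near each other, a common starting point for graph-based text clustering and keyword extraction. The window size and whitespace tokenization below are illustrative simplifications.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Build an undirected word co-occurrence graph from raw text.

    Nodes are words; an edge's weight counts how often two words
    appear within `window` tokens of each other. Returns a dict
    mapping sorted word pairs to their co-occurrence counts.
    """
    edges = defaultdict(int)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, w in enumerate(tokens):
            for u in tokens[i + 1 : i + window + 1]:
                if u != w:
                    a, b = sorted((w, u))
                    edges[(a, b)] += 1
    return dict(edges)

g = cooccurrence_graph([
    "graphs capture structure in text",
    "neural networks learn from text structure",
])
```

Edge weights accumulate across sentences ("structure" and "text" co-occur in both, so that edge has weight 2), and the resulting graph can be fed to a graph neural network or a graph clustering algorithm in place of a flat bag-of-words vector.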
Data-driven approaches for genetic characterization of SARS-CoV-2 lineages
Fatima Mostefai
Isabel Gamache
Jessie Huang
Arnaud N’Guessan
Justin Pelletier
Ahmad Pesaranghader
David J. Hamelin
Carmen Lia Murall
Raphael Poujol
Jean-Christophe Grenier
Martin Smith
Etienne Caron
Morgan Craig
Jesse Shapiro
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To understand the evolution of this virus in humans, and to assist in tracing infection pathways and designing preventive strategies, we present a set of computational tools that span phylogenomics, population genetics, and machine learning approaches. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic, using 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets, enabling real-time analyses. Furthermore, tracking the change in Tajima's D over time provides a powerful metric of population expansion. Unsupervised learning techniques further highlight key steps in variant detection and facilitate the study of the role of this genomic variation in the context of SARS-CoV-2 infection, with the Multiscale PHATE method identifying fine-scale structure in SARS-CoV-2 genetic data that underlies the emergence of key lineages. The computational framework presented here is useful for real-time genomic surveillance of SARS-CoV-2 and could be applied to any pathogen that threatens the health of populations of humans and other organisms worldwide.
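The two estimators behind Tajima's D, pairwise nucleotide diversity (pi) and Watterson's theta, can be computed directly from an alignment; D is their normalized difference, and pi falling below theta is the signature of population expansion referenced above. A toy sketch on a made-up alignment, not the authors' pipeline:

```python
from itertools import combinations

def diversity_stats(sequences):
    """Compute pi and Watterson's theta from aligned sequences.

    pi: mean number of pairwise nucleotide differences.
    theta_w: S / a1, where S is the number of segregating sites
    and a1 = sum_{i=1}^{n-1} 1/i. Tajima's D normalizes pi - theta_w;
    D < 0 (pi < theta_w) suggests recent population expansion.
    """
    n = len(sequences)
    length = len(sequences[0])

    # S: number of segregating (polymorphic) sites in the alignment.
    s = sum(1 for i in range(length)
            if len({seq[i] for seq in sequences}) > 1)

    # pi: average pairwise Hamming distance across all sequence pairs.
    pairs = list(combinations(sequences, 2))
    pi = sum(sum(a != b for a, b in zip(x, y))
             for x, y in pairs) / len(pairs)

    a1 = sum(1.0 / i for i in range(1, n))
    return pi, s / a1

# Toy alignment with many rare variants (each private to one sequence):
# rare variants inflate S relative to pi, pushing pi below theta_w,
# the expansion signal that Tajima's D detects.
seqs = ["AAAA", "CAAA", "ACAA", "AACA"]
pi, theta_w = diversity_stats(seqs)
```

In a surveillance setting the same two statistics would be recomputed over sliding time windows of consensus sequences, and their normalized difference tracked as the time series described above.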