Publications

Realistically distributing object placements in synthetic training data improves the performance of vision-based object detection models
Setareh Dabiri
Vasileios Lioutas
Berend Zwartsenberg
Yunpeng Liu
Matthew Niedoba
Xiaoxuan Liang
Dylan Green
Justice Sefas
Jonathan Wilder Lavington
Adam Ścibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data. We investigate specifically the impact of the object placement distribution, keeping all other aspects of the synthetic data fixed. Our experiment, training a 3D vehicle detection model in CARLA and testing on KITTI, demonstrates a substantial improvement resulting from a more realistic object placement distribution.
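As a rough illustration of the idea (not the authors' pipeline), one could estimate an empirical placement distribution from real annotations and sample synthetic object positions from it instead of placing objects uniformly at random. The function names, the histogram density estimate, and the use of 2D ground-plane positions below are assumptions for illustration.

```python
import numpy as np

def fit_placement_distribution(real_positions, bins=50):
    """Histogram-based density over (x, y) ground-plane positions of real vehicles."""
    hist, x_edges, y_edges = np.histogram2d(
        real_positions[:, 0], real_positions[:, 1], bins=bins, density=True
    )
    return hist, x_edges, y_edges

def sample_placements(hist, x_edges, y_edges, n_objects, rng=None):
    """Draw synthetic object positions proportionally to the estimated real-data density."""
    rng = rng or np.random.default_rng()
    probs = hist.flatten() / hist.sum()
    idx = rng.choice(probs.size, size=n_objects, p=probs)
    ix, iy = np.unravel_index(idx, hist.shape)
    # Jitter uniformly within the chosen histogram cell.
    x = rng.uniform(x_edges[ix], x_edges[ix + 1])
    y = rng.uniform(y_edges[iy], y_edges[iy + 1])
    return np.stack([x, y], axis=1)
```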
Think Before You Act: Decision Transformers with Internal Working Memory
Jikun Kang
Romain Laroche
Xingdi Yuan
Adam P. Trischler
Xuefei Liu
Jie Fu
Large language model (LLM)-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and compute. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model's performance on previous tasks. In contrast to LLMs' implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Thus inspired, we propose an internal working memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in both Atari games and meta-world object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture.
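A minimal sketch of what a content-addressable working memory of this kind might look like is given below. It is not the paper's architecture: the slot count, dimensions, and the write rule are assumptions, and only the generic store/retrieve pattern is illustrated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WorkingMemory(nn.Module):
    """Illustrative external memory with soft retrieval and a simple slot-update write."""

    def __init__(self, num_slots=16, dim=128):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_slots, dim))
        self.query_proj = nn.Linear(dim, dim)

    def retrieve(self, query):
        # Attention over memory slots: blend stored contents by similarity to the query.
        scores = self.query_proj(query) @ self.memory.T / self.memory.shape[1] ** 0.5
        return F.softmax(scores, dim=-1) @ self.memory

    @torch.no_grad()
    def store(self, content, rate=0.1):
        # Write by softly moving the most similar slot toward the new content.
        sims = F.cosine_similarity(content.unsqueeze(0), self.memory, dim=-1)
        slot = sims.argmax()
        self.memory[slot] = (1 - rate) * self.memory[slot] + rate * content
```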
Fourier Neural Operators for Arbitrary Resolution Climate Data Downscaling
Qidong Yang
Alex Hernandez-Garcia
Paula Harder
Venkatesh Ramesh
Prasanna Sattegeri
D. Szwarcman
C. Watson
David Rolnick
Climate simulations are essential in guiding our understanding of climate change and responding to its effects. However, it is computationally expensive to resolve complex climate processes at high spatial resolution. As one way to speed up climate simulations, neural networks have been used to downscale climate variables from fast-running low-resolution simulations, but high-resolution training data are often unobtainable or scarce, greatly limiting accuracy. In this work, we propose a downscaling method based on the Fourier neural operator. It trains with data of a small upsampling factor and can then zero-shot downscale its input to arbitrary unseen high resolutions. Evaluated both on ERA5 climate model data and on Navier-Stokes equation solution data, our downscaling model significantly outperforms state-of-the-art convolutional and generative adversarial downscaling models, both in standard single-resolution downscaling and in zero-shot generalization to higher upsampling factors. Furthermore, we show that our method also outperforms state-of-the-art data-driven partial differential equation solvers on the Navier-Stokes equations. Overall, our work bridges the gap between simulation of a physical process and interpolation of low-resolution output, showing that the two approaches can be combined to significantly improve on either alone.
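The key property enabling arbitrary-resolution downscaling is that a Fourier neural operator learns weights over a fixed set of low-frequency Fourier modes, so the same layer applies to inputs of any spatial resolution. Below is a minimal sketch of such a 2D spectral convolution layer; the mode counts and initialization are assumptions, and the paper's full model adds further components.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Illustrative 2D spectral convolution: learned complex weights on truncated Fourier modes."""

    def __init__(self, in_ch, out_ch, modes1=12, modes2=12):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes1, modes2, dtype=torch.cfloat)
        )

    def forward(self, x):                      # x: (batch, in_ch, H, W)
        x_ft = torch.fft.rfft2(x)              # (batch, in_ch, H, W//2 + 1)
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-2], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Multiply only the retained low-frequency modes by the learned complex weights.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.weight
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])   # back to (batch, out_ch, H, W)
```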
Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
Mostafa ElAraby
Guy Wolf
Should We Attend More or Less? Modulating Attention for Fairness
A. Zayed
Goncalo Mordido
Samira Shabanian
Sarath Chandar
Data Imputation with an Autoencoder and MAGIC
Devin Eddington
Andres Felipe Duque Correa
Guy Wolf
Kevin R. Moon
Missing data is a common problem in many applications. Imputing missing values is a challenging task, as the imputations need to be accurate and robust to avoid introducing bias in downstream analysis. In this paper, we propose an ensemble method that combines the strengths of a manifold learning-based imputation method called MAGIC and an autoencoder deep learning model. We call our method Deep MAGIC. Deep MAGIC is trained on a linear combination of the mean squared error of the original data and the mean squared error of the MAGIC-imputed data. Experimental results on three benchmark datasets show that Deep MAGIC outperforms several state-of-the-art imputation methods, demonstrating its effectiveness and robustness in handling large amounts of missing data.
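The training objective described above can be sketched directly; the weighting scheme and masking convention below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def deep_magic_loss(reconstruction, original, magic_imputed, observed_mask, alpha=0.5):
    """Weighted combination of reconstruction error on observed entries and error
    against the MAGIC-imputed data, as a hypothetical instance of the described loss."""
    # MSE on entries that were actually observed in the original data.
    mse_original = F.mse_loss(reconstruction[observed_mask], original[observed_mask])
    # MSE against the MAGIC imputation over all entries.
    mse_magic = F.mse_loss(reconstruction, magic_imputed)
    return alpha * mse_original + (1 - alpha) * mse_magic
```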
Graph Fourier MMD for Signals on Graphs
Samuel Leone
Aarthi Venkat
Guillaume Huguet
Alexander Tong
Guy Wolf
Smita Krishnaswamy
While numerous methods have been proposed for computing distances between probability distributions in Euclidean space, relatively little attention has been given to computing such distances for distributions on graphs. However, there has been a marked increase in data that either lies on a graph (such as protein interaction networks) or can be modeled as a graph (single-cell data), particularly in the biomedical sciences. Thus, it becomes important to find ways to compare signals defined on such graphs. Here, we propose Graph Fourier MMD (GFMMD), a novel distance between distributions and signals on graphs. GFMMD is defined via an optimal witness function that is both smooth on the graph and maximizes the difference in expectation between the pair of distributions on the graph. We find an analytical solution to this optimization problem as well as an embedding of distributions that results from this method. We also prove several properties of this method, including scale invariance and applicability to disconnected graphs. We showcase it on graph benchmark datasets as well as on single-cell RNA-sequencing data analysis. In the latter, we use the GFMMD-based gene embeddings to find meaningful gene clusters. We also propose a novel type of score for gene selection, called the gene localization score, which helps select genes for cellular state space characterization.
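As a hedged sketch of a distance of this flavor: constraining the witness function f to be smooth on the graph (small f^T L f, with L the graph Laplacian) while maximizing its difference in expectation between the two distributions admits a closed form involving the Laplacian pseudoinverse. The normalization and exact formulation in the paper may differ.

```python
import numpy as np

def graph_witness_distance(L, p, q):
    """L: (n, n) graph Laplacian; p, q: probability vectors over the n nodes.
    Returns an illustrative smoothness-constrained witness distance and its witness."""
    delta = p - q
    L_pinv = np.linalg.pinv(L)             # pseudoinverse handles the constant nullspace
    value = float(delta @ L_pinv @ delta)  # squared distance under the smoothness constraint
    witness = L_pinv @ delta               # (unnormalized) optimal witness function on nodes
    return np.sqrt(value), witness
```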
Hybrid GRAND Sphere Decoding: Accelerated GRAND for Low-Rate Codes
Huayi Zhou
Guessing random additive noise decoding (GRAND) and sphere decoding (SD) are two algorithms that can achieve maximum likelihood decoding. In this paper, a hybrid GRAND-SD (HGRAND) scheme is proposed to extend GRAND to low-rate codes. An accelerated GRAND decoder, assisted by a sphere decoder running in parallel that provides hints for skipping certain candidates, allows HGRAND to achieve a latency below the minimum latency of the individual component decoders while guaranteeing error-correction performance.
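For context, a minimal sketch of plain hard-decision GRAND over a binary linear code is shown below: noise patterns are guessed in order of increasing Hamming weight, and the first guess whose removal yields a valid codeword is returned. The sphere-decoder hints that let the hybrid scheme skip candidates are not shown, and the parity-check interface is an assumption.

```python
import numpy as np
from itertools import combinations

def grand_decode(received, H, max_weight=3):
    """received: 0/1 integer array of length n; H: parity-check matrix of the code."""
    n = H.shape[1]
    for w in range(max_weight + 1):                    # weight-0 guess (no errors) comes first
        for flips in combinations(range(n), w):
            candidate = received.copy()
            candidate[list(flips)] ^= 1                # remove the guessed noise pattern
            if not np.any(H @ candidate % 2):          # all parity checks satisfied?
                return candidate
    return None                                        # abandon: no codeword within max_weight
```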
Manifold Alignment with Label Information
Andres F. Duque Correa
Myriam Lizotte
Guy Wolf
Kevin R. Moon
Multi-domain data is becoming increasingly common and presents both challenges and opportunities in the data science community. The integration of distinct data views can be used for exploratory data analysis and can benefit downstream analysis, including machine learning tasks. With this in mind, we present a novel manifold alignment method called MALI (Manifold alignment with label information) that learns a correspondence between two distinct domains. MALI occupies a middle ground between the more commonly addressed semi-supervised manifold alignment, where some correspondences between the two domains are assumed to be known beforehand, and the purely unsupervised case, where no information linking both domains is available. To do this, MALI learns the manifold structure in both domains via a diffusion process and then leverages discrete class labels to guide the alignment. MALI recovers a pairing and a common representation that reveals related samples in both domains. We show that MALI outperforms the current state-of-the-art manifold alignment methods across multiple datasets.
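A rough sketch of the two ingredients (a per-domain diffusion operator and a label-guided cross-domain pairing) is given below. It is not MALI's actual algorithm; the kernel, the class-profile comparison, and the nearest-match pairing are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, sigma=1.0, t=3):
    """t-step diffusion operator from a Gaussian affinity kernel (illustrative choice)."""
    K = np.exp(-cdist(X, X) ** 2 / sigma ** 2)
    P = K / K.sum(axis=1, keepdims=True)              # row-normalize into a Markov matrix
    return np.linalg.matrix_power(P, t)

def class_diffusion_profile(X, y, num_classes, sigma=1.0, t=3):
    """Diffused class mass around each sample; comparable across domains."""
    P = diffusion_operator(X, sigma, t)
    onehot = np.eye(num_classes)[y]                   # (n, num_classes)
    return P @ onehot

def label_guided_pairing(X1, y1, X2, y2, num_classes):
    F1 = class_diffusion_profile(X1, y1, num_classes)
    F2 = class_diffusion_profile(X2, y2, num_classes)
    cost = cdist(F1, F2)                              # compare samples via diffused class profiles
    cost[y1[:, None] != y2[None, :]] = np.inf         # forbid cross-label matches
    return cost.argmin(axis=1)                        # for each sample in X1, its match in X2
```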
ToxBuster: In-game Chat Toxicity Buster with BERT
Zachary Yang
Yasmine Maricar
M. Davari
Nicolas Grenon-Godbout
Detecting toxicity in online spaces is challenging and an ever more pressing problem given the increase in social media and gaming consumption. We introduce ToxBuster, a simple and scalable model trained on a relatively large dataset of 194k lines of game chat from Rainbow Six Siege and For Honor, carefully annotated for different kinds of toxicity. Compared to the existing state-of-the-art, ToxBuster achieves 82.95% (+7) in precision and 83.56% (+57) in recall. This improvement is obtained by leveraging past chat history and metadata. We also study the implications for real-time and post-game moderation, as well as the model's transferability from one game to another.
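A hedged illustration of the kind of input construction this suggests, using a generic BERT classifier: the current chat line is paired with recent history and simple metadata. The model name, metadata fields, and label set are assumptions, and the classifier head would need fine-tuning on annotated chat data before use.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def classify_line(current_line, history, speaker_team):
    """Score one chat line using recent history and a toy metadata field as context."""
    context = f" {tokenizer.sep_token} ".join(history[-5:])         # last few chat lines
    metadata = f"team={speaker_team}"                               # hypothetical metadata field
    text = f"{metadata} {tokenizer.sep_token} {context}"
    inputs = tokenizer(text, current_line, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()                      # probability of "toxic"
```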
Training Acceleration of Frequency Domain CNNs Using Activation Compression
Seyyed Hasan Mozafari
James J. Clark
Brett Meyer
Reducing the complexity of training convolutional neural networks results in lower energy consumption during training, or higher accuracy by admitting a greater number of training epochs within a training time budget. During backpropagation, a considerable amount of temporary data is offloaded from GPU memory to CPU memory, increasing training time. In this paper, we address this training time overhead by introducing an activation compression technique for frequency-domain convolutional neural networks. Applying this compression technique to frequency-domain AlexNet results in activation compression of 57.7% and a reduction of training time by 23%, with a negligible effect on classification accuracy.
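One way such a scheme could look, sketched under assumptions (top-k magnitude compression and a fixed keep ratio are illustrative, not the paper's method): activations saved for the backward pass are compressed before being offloaded to CPU memory and decompressed on demand.

```python
import math
import torch

def compress_and_offload(activation, keep_ratio=0.25):
    """Keep only the largest-magnitude coefficients and move them to CPU memory."""
    flat = activation.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    _, indices = torch.topk(flat.abs(), k)
    return {
        "values": flat[indices].cpu(),      # only the retained coefficients leave the GPU
        "indices": indices.cpu(),
        "shape": activation.shape,
    }

def load_and_decompress(packed, device="cuda"):
    """Rebuild a dense tensor on the GPU from the offloaded sparse representation."""
    flat = torch.zeros(math.prod(packed["shape"]), device=device)
    flat[packed["indices"].to(device)] = packed["values"].to(device)
    return flat.view(packed["shape"])
```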