Publications

MixupE: Understanding and Improving Mixup from Directional Derivative Perspective
Vikas Verma
Yingtian Zou
Sarthak Mittal
Wai Hoh Tang
Hieu Pham
Juho Kannala
Arno Solin
Kenji Kawaguchi
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. Based on this new insight, we propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.
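Since the abstract describes vanilla Mixup only in words, a minimal sketch of the interpolation step may help. This is a generic numpy illustration of standard Mixup, not the authors' MixupE code; the function name, the Beta(alpha, alpha) sampling, and the shapes are our assumptions.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Vanilla Mixup: convexly combine a batch with a shuffled copy of itself.

    x: (batch, ...) inputs; y: (batch, num_classes) one-hot labels.
    Returns interpolated inputs and labels for training.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # interpolation coefficient
    perm = rng.permutation(len(x))          # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```

Training on (x_mix, y_mix) instead of (x, y) is the regularizer whose directional-derivative effects the paper analyzes.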
Motor cortex latent dynamics encode arm movement direction and urgency independently
Andrea Colins Rodriguez
Lee Miller
Mark D. Humphries
The fluid movement of an arm is controlled by multiple parameters that can be set independently. Recent studies argue that arm movements are generated by the collective dynamics of neurons in motor cortex. But how these collective dynamics simultaneously encode and control multiple parameters of movement is an open question. Using a task where monkeys made sequential, varied arm movements, we show that the direction and urgency of arm movements are simultaneously encoded in the low-dimensional trajectories of population activity: each movement's direction by a fixed, looped neural trajectory and its urgency by how quickly that trajectory was traversed. Network models showed this latent coding is potentially advantageous as it allows the direction and urgency of arm movement to be independently controlled. Our results suggest how low-dimensional neural dynamics can define multiple parameters of goal-directed movement simultaneously.
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
Vincent Mai
Philippe Maisonneuve
Tianyu Zhang
Hadi Nekoei
To integrate high amounts of renewable energy resources, electrical power grids must be able to cope with high-amplitude, fast-timescale variations in power generation. Frequency regulation through demand response has the potential to coordinate temporally flexible loads, such as air conditioners, to counteract these variations. Existing approaches for discrete control with dynamic constraints struggle to provide satisfactory performance for fast-timescale action selection with hundreds of agents. We propose a decentralized agent trained with multi-agent proximal policy optimization with localized communication. We explore two communication frameworks: hand-engineered, or learned through targeted multi-agent communication. The resulting policies perform well and robustly for frequency regulation, and scale seamlessly to arbitrary numbers of houses for constant processing times.
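To make the decentralized, locally communicating setup concrete, here is a minimal numpy sketch in which each house's agent acts on its own observation plus messages aggregated from a few neighbors. The class name LocalPolicy, the linear policy, and the mean aggregation are our illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

class LocalPolicy:
    """Toy decentralized policy: one copy runs per house, parameters shared.

    Each agent acts from its own observation plus an aggregate of
    messages received from a fixed set of neighboring houses.
    """
    def __init__(self, obs_dim, msg_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(obs_dim + msg_dim, n_actions))

    def act(self, obs, neighbor_msgs):
        msg = np.mean(neighbor_msgs, axis=0)        # localized aggregation
        logits = np.concatenate([obs, msg]) @ self.w
        return int(np.argmax(logits))               # discrete load action

# Because each agent reads only local state and neighbor messages,
# per-step evaluation cost stays constant as the number of houses grows.
```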
Multi-Environment Pretraining Enables Transfer to Action Limited Datasets
David Venuto
Sherry Yang
Pieter Abbeel
Igor Mordatch
Ofir Nachum
Using massive datasets to train large-scale models has emerged as a dominant approach for broad generalization in natural language and vision applications. In reinforcement learning, however, a key challenge is that available data of sequential decision making is often not annotated with actions - for example, videos of game-play are much more available than sequences of frames paired with the logged game controls. We propose to circumvent this challenge by combining large but sparsely-annotated datasets from a target environment of interest with fully-annotated datasets from various other source environments. Our method, Action Limited PreTraining (ALPT), leverages the generalization capabilities of inverse dynamics modelling (IDM) to label missing action data in the target environment. We show that utilizing even one additional environment dataset of labelled data during IDM pretraining gives rise to substantial improvements in generating action labels for unannotated sequences. We evaluate our method on benchmark game-playing environments and show that we can significantly improve game performance and generalization capability compared to other approaches, even when using annotated datasets equivalent to only 12 minutes of gameplay.
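The core mechanism, training an inverse dynamics model on annotated transitions and then using it to pseudo-label unannotated ones, can be sketched in a few lines. The sklearn classifier and the flat state features below are stand-ins chosen for brevity under our own assumptions; they are not the models used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_with_idm(src_states, src_actions, tgt_states):
    """Toy inverse dynamics modelling (IDM) for action labeling.

    src_states: (N+1, d) fully-annotated source trajectory.
    src_actions: (N,) discrete actions between consecutive source states.
    tgt_states: (M+1, d) unannotated target trajectory.
    Returns predicted action labels for the target transitions.
    """
    # IDM input: pairs of consecutive states (s_t, s_{t+1}).
    src_pairs = np.hstack([src_states[:-1], src_states[1:]])
    idm = LogisticRegression(max_iter=1000).fit(src_pairs, src_actions)

    tgt_pairs = np.hstack([tgt_states[:-1], tgt_states[1:]])
    return idm.predict(tgt_pairs)   # pseudo-labels for downstream training
```

The pseudo-labeled target transitions can then be folded back into ordinary action-conditioned pretraining.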
Multivariate Time-Series Anomaly Detection with Temporal Self-supervision and Graphs: Application to Vehicle Failure Prediction
Hadi Hojjati
Mohammadreza Sadeghi
Neighbor Auto-Grouping Graph Neural Networks for Handover Parameter Configuration in Cellular Network
Mehrtash Mehrabi
Walid Masoudimansour
Yingxue Zhang
Jie Chuai
Zhitang Chen
Jianye Hao
Yanhui Geng
Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization
Chris Junchi Li
Huizhuo Yuan
Angela Yuan
Quanquan Gu
Michael Jordan
We propose a new first-order optimization algorithm, Accelerated Gradient-Optimistic Gradient (AG-OG) Descent Ascent, for separable convex-concave minimax optimization. The main idea of our algorithm is to carefully leverage the structure of the minimax problem, performing Nesterov acceleration on the individual component and optimistic gradient on the coupling component. Equipped with proper restarting, we show that AG-OG achieves the optimal convergence rate (up to a constant) for a variety of settings, including bilinearly coupled strongly convex-strongly concave minimax optimization (bi-SC-SC), bilinearly coupled convex-strongly concave minimax optimization (bi-C-SC), and bilinear games. We also extend our algorithm to the stochastic setting and achieve the optimal convergence rate in both bi-SC-SC and bi-C-SC settings. AG-OG is the first single-call algorithm with optimal convergence rates in both deterministic and stochastic settings for bilinearly coupled minimax optimization problems.
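To make the "optimistic gradient on the coupling component" concrete, here is a numpy sketch of plain optimistic gradient descent ascent on a bilinearly coupled quadratic toy problem. It deliberately omits the Nesterov acceleration and restarting of the full AG-OG method, and all names and the test objective are our own choices.

```python
import numpy as np

def ogda_bilinear(A, B, C, x0, y0, eta=0.1, steps=500):
    """Optimistic GDA for f(x, y) = 0.5 x'Ax + x'By - 0.5 y'Cy.

    The 'optimistic' update extrapolates with the previous gradient
    (2*g_t - g_{t-1}), which is what tames the rotational dynamics
    induced by the bilinear coupling term x'By.
    """
    x, y = x0.copy(), y0.copy()
    gx_prev = A @ x + B @ y          # grad_x f at the initial point
    gy_prev = B.T @ x - C @ y        # grad_y f at the initial point
    for _ in range(steps):
        gx = A @ x + B @ y
        gy = B.T @ x - C @ y
        x -= eta * (2 * gx - gx_prev)   # descent on the min player
        y += eta * (2 * gy - gy_prev)   # ascent on the max player
        gx_prev, gy_prev = gx, gy
    return x, y
```

AG-OG, by contrast, applies Nesterov acceleration to the individual A and C terms while treating the B coupling optimistically, which is the structure the abstract highlights.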
Neural Manifolds and Gradient-Based Adaptation in Neural-Interface Tasks
Alexandre Payeur
Amy L. Orsborn
Neural activity tends to reside on manifolds whose dimension is much lower than the dimension of the whole neural state space. Experiments using brain-computer interfaces with microelectrode arrays implanted in the motor cortex of nonhuman primates tested the hypothesis that external perturbations should produce different adaptation strategies depending on how "aligned" the perturbation is with respect to a pre-existing intrinsic manifold. On the one hand, perturbations within the manifold (WM) evoked fast reassociations of existing patterns for rapid adaptation. On the other hand, perturbations outside the manifold (OM) triggered the slow emergence of new neural patterns underlying a much slower adaptation that, without adequate training protocols, was inconsistent or virtually impossible. This suggests that the time scale and overall difficulty of adaptation depend fundamentally on the structure of neural activity. Here, we used a simplified static Gaussian model to show that gradient-descent learning could explain the differences between adaptation to WM and OM perturbations. For small learning rates, we found that the adaptation speeds were different but the model eventually adapted to both perturbations. Moreover, sufficiently large learning rates could entirely prohibit adaptation to OM perturbations while preserving adaptation to WM perturbations, in agreement with experiments. Adopting an incremental training protocol, as has been done in experiments, permitted a swift recovery of full adaptation in cases where OM perturbations were previously impossible to relearn. Finally, we also found that gradient descent was compatible with the reassociation mechanism on short adaptation time scales. Since gradient descent has many biologically plausible variants, our findings establish gradient-based learning as a plausible mechanism for adaptation under network-level constraints, with a central role for the learning rate.
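A tiny numeric sketch of the kind of effect described: gradient descent on a linear readout adapts quickly along a high-variance (within-manifold) direction and slowly along a low-variance (outside-manifold) one in a static Gaussian model. The specific loss, dimensions, and learning rate below are our illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static "activity" with a strong 1-D manifold inside a 2-D state space:
# large variance along the first axis, very little along the second.
cov = np.diag([1.0, 0.01])
X = rng.multivariate_normal(np.zeros(2), cov, size=2000)

def adapt(target_dir, eta, steps=200):
    """Gradient descent on a linear readout w so that X @ w matches a
    target defined along target_dir; returns the relative residual error."""
    y = X @ target_dir
    w = np.zeros(2)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w -= eta * grad
    return np.linalg.norm(X @ w - y) / np.linalg.norm(y)

wm = adapt(np.array([1.0, 0.0]), eta=0.5)   # within-manifold target
om = adapt(np.array([0.0, 1.0]), eta=0.5)   # outside-manifold target
print(f"residual within-manifold: {wm:.3f}, outside-manifold: {om:.3f}")
```

Because the curvature of the loss along each direction scales with the activity variance in that direction, the outside-manifold residual stays large after the same number of steps, mirroring the slow OM adaptation the abstract reports.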