Publications

Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding
Le Zhang
Md. Rabiul Awal
Contrastive Positive Unlabeled Learning
Anish Acharya
Sujay Sanghavi
Li Jing
Bhargav Bhushanam
I. Dhillon
Self-supervised pretraining on unlabeled data followed by supervised fine-tuning on labeled data is a popular paradigm for learning from limited labeled examples. We extend this paradigm to the classical positive-unlabeled (PU) setting, where the task is to learn a binary classifier given only a few labeled positive samples and (often) a large number of unlabeled samples (which could be positive or negative). We first propose a simple extension of the standard InfoNCE family of contrastive losses to the PU setting and show that it learns superior representations compared to existing unsupervised and supervised approaches. We then develop a simple methodology to pseudo-label the unlabeled samples using a new PU-specific clustering scheme; these pseudo-labels can then be used to train the final (positive vs. negative) classifier. Our method handily outperforms state-of-the-art PU methods on several standard PU benchmark datasets, while not requiring a priori knowledge of any class prior (a common assumption in other PU methods). We also provide a simple theoretical analysis that motivates our methods.
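As a rough illustration of the loss extension described above, the sketch below adapts an InfoNCE-style contrastive loss so that the few labeled positives attract one another while unlabeled samples fall back to standard instance discrimination. This is one plausible reading of the abstract, not the paper's exact loss; all names and the temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def pu_info_nce(z1, z2, is_positive, temperature=0.1):
    # z1, z2: (N, d) embeddings of two augmented views of the same batch
    # is_positive: (N,) bool mask marking the few labeled positives
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    pos_mask = torch.cat([is_positive, is_positive])          # (2N,)
    sim = z @ z.t() / temperature                             # (2N, 2N)
    # exclude self-similarity from every softmax denominator
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(eye, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    n = len(z1)
    idx = torch.arange(n, device=z.device)
    # every sample treats its other augmented view as a positive
    view_pos = log_prob[idx, idx + n].sum() + log_prob[idx + n, idx].sum()
    # labeled positives additionally attract all other labeled positives
    pp = pos_mask.unsqueeze(0) & pos_mask.unsqueeze(1) & ~eye
    sup_pos = log_prob.masked_fill(~pp, 0.0).sum() / pp.sum().clamp(min=1)
    return -(view_pos / (2 * n) + sup_pos)
```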
Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity
Eduard Gorbunov
Adrien Taylor
Samuel Horváth
Algorithms for min-max optimization and variational inequalities are often studied under monotonicity assumptions. Motivated by non-monotone machine learning applications, we follow the line of works (Diakonikolas et al., 2021; Lee & Kim, 2021; Pethick et al., 2022; Böhm, 2022) aiming to go beyond monotonicity by considering the weaker *negative comonotonicity* assumption. In this work, we provide tight complexity analyses for the Proximal Point (PP), Extragradient (EG), and Optimistic Gradient (OG) methods in this setup, closing several open questions on their guarantees beyond monotonicity. In particular, we derive the first non-asymptotic convergence rates for PP under negative comonotonicity and star-negative comonotonicity and show their tightness by constructing worst-case examples; we also relax the assumptions for the last-iterate convergence guarantees for EG and OG and prove the tightness of the existing best-iterate guarantees for EG and OG by constructing counter-examples.
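For reference, the assumption and methods under study admit the standard formulations below; notation is generic, and the paper's constants and conventions may differ. An operator F is ρ-comonotone, and negatively comonotone when ρ < 0:

```latex
% Negative comonotonicity (\rho < 0); star-negative fixes y = x^*, a solution with F(x^*) = 0:
\langle F(x) - F(y),\, x - y \rangle \ \ge\ \rho\, \| F(x) - F(y) \|^2
\qquad \forall\, x, y.
% The methods, with step size \gamma > 0 (OG in one common form):
x_{k+1} = x_k - \gamma F(x_{k+1})                                               \tag{PP}
\tilde{x}_k = x_k - \gamma F(x_k), \quad x_{k+1} = x_k - \gamma F(\tilde{x}_k)  \tag{EG}
x_{k+1} = x_k - \gamma \big( 2 F(x_k) - F(x_{k-1}) \big)                        \tag{OG}
```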
Cutting Planes from the Branch-and-Bound Tree: Challenges and Opportunities
Claudio Contardo
Andrea Tramontani
DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection
Hadi Hojjati
Semi-supervised anomaly detection aims to detect anomalies using a model trained only on normal data. With recent advancements in deep learning, researchers have designed efficient deep anomaly detection methods. Existing works commonly use neural networks to map the data into a more informative representation and then apply an anomaly detection algorithm. In this paper, we propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hypersphere on its latent representation. We propose an anomaly score that combines the autoencoder's reconstruction error and the distance from the center of the enclosing hypersphere in the latent representation. Minimizing this anomaly score helps us learn the underlying distribution of the normal class during training. Including the reconstruction error in the anomaly score ensures that DASVDD does not suffer from the common hypersphere-collapse issue, since the model does not converge to the trivial solution of mapping all inputs to a constant point in the latent representation. Experimental evaluations on several benchmark datasets show that the proposed method outperforms commonly used state-of-the-art anomaly detection algorithms while maintaining robust performance across different anomaly classes.
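A minimal sketch of the anomaly score described above, combining the reconstruction error with the latent distance to the hypersphere center. The trade-off weight `gamma` and the exact combination are assumptions, not the paper's formula.

```python
import torch

def dasvdd_score(encoder, decoder, x, center, gamma=1.0):
    # x: (N, ...) batch; center: (d,) hypersphere center in latent space
    # gamma: hypothetical weight balancing the two terms
    z = encoder(x)
    recon_err = ((decoder(z) - x) ** 2).flatten(1).sum(dim=1)  # (N,)
    center_dist = ((z - center) ** 2).sum(dim=1)               # (N,)
    return recon_err + gamma * center_dist                     # higher = more anomalous
```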
Deep Multirepresentation Learning for Data Clustering.
Mohammadreza Sadeghi
Deep clustering incorporates embedding into clustering in order to find a lower-dimensional space suitable for clustering tasks. Conventional deep clustering methods aim to obtain a single global embedding subspace (aka latent space) for all the data clusters. In contrast, in this article, we propose a deep multirepresentation learning (DML) framework for data clustering whereby each difficult-to-cluster data group is associated with its own distinct optimized latent space and all the easy-to-cluster data groups are associated with a general common latent space. Autoencoders (AEs) are employed for generating the cluster-specific and general latent spaces. To specialize each AE in its associated data cluster(s), we propose a novel and effective loss function consisting of weighted reconstruction and clustering losses of the data points, where higher weights are assigned to the samples more likely to belong to the corresponding cluster(s). Experimental results on benchmark datasets demonstrate that the proposed DML framework and loss function outperform state-of-the-art clustering approaches. In addition, the results show that the DML method significantly outperforms state-of-the-art methods on imbalanced datasets as a result of assigning an individual latent space to the difficult clusters.
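The weighted loss could look roughly like the sketch below: per-sample weights (the probabilities of belonging to this AE's cluster(s)) scale a sum of reconstruction and clustering terms. All names and the balance term `alpha` are illustrative; the paper's exact formulation may differ.

```python
import torch

def dml_ae_loss(x, x_hat, z, centers, weights, alpha=1.0):
    # x, x_hat: (N, ...) inputs and this AE's reconstructions
    # z: (N, d) latent codes; centers: (K, d) cluster centers for this AE
    # weights: (N,) probability each sample belongs to this AE's cluster(s)
    recon = ((x_hat - x) ** 2).flatten(1).sum(dim=1)   # (N,)
    d2 = torch.cdist(z, centers) ** 2                  # (N, K)
    clust = d2.min(dim=1).values                       # distance to nearest center
    return (weights * (recon + alpha * clust)).mean()
```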
Deep Networks as Paths on the Manifold of Neural Representations
Richard D Lange
Devin Kwok
Jordan Kyle Matelsky
Xinyue Wang
Konrad Paul Kording
Definitive Care for Severely Injured Children in Quebec
Mélyssa Fortin
Zoe Atsaidis
Brent Hopkins
Etienne St-Louis
Elena Guadagno
Debbie Friedman
A Distributed Pricing Strategy for Edge Computation Offloading Optimization in Autonomous Driving
Jie Tang
Weilin Zhu
Xiaoming Li
Shaoshan Liu
The increase of on-vehicle applications has brought explosive computation demands to autonomous vehicles and overwhelmed their limited onboard resources. Edge computing can offload application load and effectively alleviate this problem. However, the introduction of edge computing faces significant challenges, including considerable resource contention due to the scarcity of edge resources and the competition among edge computing resource providers to win users’ service requests. We note that the problem is not purely technical, as solutions to these two problems can conflict with each other. In this paper, we propose a distributed pricing strategy that achieves full use of computing resources at the edge and maximizes the revenue of service operators, both while guaranteeing the quality of service of on-vehicle applications. More specifically, we first use multi-leader multi-follower Stackelberg game theory to model the pricing of on-vehicle task offloading under edge computing. Next, we propose a distributed pricing strategy that enables edge servers to adjust their local price distributions so that they can bargain with offloading requesters independently. Experimental results confirm that the proposed distributed pricing strategy provides more optimized utilization of server computing resources while guaranteeing the performance of in-vehicle applications.
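To make the leader-follower structure concrete, here is a toy sketch in which vehicles (followers) greedily pick the cheapest feasible server and servers (leaders) independently nudge their prices toward full utilization. The update rule and all names are illustrative stand-ins for the paper's Stackelberg formulation, not its actual algorithm.

```python
def distributed_pricing(capacity, tasks, rounds=50, lr=0.05):
    # capacity: dict server -> CPU units; tasks: list of task demands
    prices = {s: 1.0 for s in capacity}              # initial unit prices
    for _ in range(rounds):
        load = {s: 0 for s in capacity}
        for demand in tasks:                         # followers' responses
            feasible = [s for s in capacity
                        if load[s] + demand <= capacity[s]]
            if feasible:
                s = min(feasible, key=lambda s: prices[s])
                load[s] += demand
        for s in capacity:                           # leaders' local updates
            utilization = load[s] / capacity[s]
            # raise the price when near capacity, lower it when idle
            prices[s] *= 1.0 + lr * (utilization - 0.5)
    return prices

# example: three servers and ten unit-demand offloading requests
print(distributed_pricing({"s1": 8, "s2": 4, "s3": 6}, [1] * 10))
```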
On Dynamic Program Decompositions of Static Risk Measures
Jia Lin Hau
Mohammad Ghavamzadeh
Marek Petrik
Optimizing static risk-averse objectives in Markov decision processes is challenging because they do not readily admit dynamic programming decompositions. Prior work has proposed dynamic decompositions of risk measures that help formulate dynamic programs on an augmented state space. This paper shows that several existing decompositions are inherently inexact, contradicting several claims in the literature. In particular, we give examples showing that popular decompositions for the CVaR and EVaR risk measures are strict overestimates of the true risk values. However, an exact decomposition is possible for VaR, and we give a simple proof that illustrates the fundamental difference between the dynamic programming properties of VaR and CVaR.
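For reference, the two risk measures at the center of the analysis have the standard definitions below, for a loss X at confidence level α (sign and level conventions vary across papers):

```latex
\mathrm{VaR}_\alpha(X) = \inf\{\, t \in \mathbb{R} : \Pr[X \le t] \ge \alpha \,\},
\qquad
\mathrm{CVaR}_\alpha(X) = \min_{t \in \mathbb{R}}
  \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)_+\big] \,\Big\}.
```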
DynGFN: Bayesian Dynamic Causal Discovery using Generative Flow Networks
Lazar Atanackovic
Alexander Tong
Jason Hartford
Leo Jingyu Lee
Bo Wang
Learning the causal structure of observable variables is a central focus of scientific discovery. Bayesian causal discovery methods tackle this problem by learning a posterior over the set of admissible graphs given our priors and observations. Existing methods primarily consider observations from static systems and assume the underlying causal structure takes the form of a directed acyclic graph (DAG). In settings with dynamic feedback mechanisms that regulate the trajectories of individual variables, this acyclicity assumption fails unless we account for time. We focus on learning Bayesian posteriors over cyclic graphs and treat causal discovery as a problem of sparse identification of a dynamical system. This imposes a natural temporal causal order between variables and captures cyclic feedback loops through time. Under this lens, we propose a new framework for Bayesian causal discovery for dynamical systems and present a novel generative flow network architecture (DynGFN) tailored for this task. Our results indicate that DynGFN learns posteriors that better encapsulate the distributions over admissible cyclic causal structures than counterpart state-of-the-art approaches.
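Schematically, the sparse-identification view can be written as follows, where the structure matrix G encodes (possibly cyclic) causal edges and the target is a posterior over G given data; this is generic notation, not the paper's exact formulation:

```latex
\frac{dx}{dt} = f(x;\, G), \qquad G_{ij} \neq 0 \iff x_j \to x_i,
\qquad p(G \mid \mathcal{D}) \ \propto\ p(\mathcal{D} \mid G)\, p(G).
```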
An Empirical Investigation of the Role of Pre-training in Lifelong Learning
Sanket Vaibhav Mehta
Darshan Patil
Emma Strubell
The lifelong learning paradigm in machine learning is an attractive alternative to the more prominent isolated learning scheme, not only due to its resemblance to biological learning but also due to its potential to reduce energy waste by obviating excessive model re-training. A key challenge to this paradigm is the phenomenon of catastrophic forgetting. With the increasing popularity and success of pre-trained models in machine learning, we pose the question: What role does pre-training play in lifelong learning, specifically with respect to catastrophic forgetting? We investigate existing methods in the context of large, pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel dataset of 15 diverse NLP tasks. Across all settings, we observe that generic pre-training implicitly alleviates the effects of catastrophic forgetting when learning multiple tasks sequentially, compared to randomly initialized models. We then further investigate why pre-training alleviates forgetting in this setting. We study this phenomenon by analyzing the loss landscape, finding that pre-trained weights appear to ease forgetting by leading to wider minima. Based on this insight, we propose jointly optimizing the current task loss and the sharpness of the loss basin to explicitly encourage wider basins during sequential fine-tuning. We show that this optimization approach leads to performance comparable to the state of the art in task-sequential continual learning across multiple settings, without retaining a memory that scales with the number of tasks.
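The joint objective over task loss and basin sharpness is reminiscent of sharpness-aware minimization (Foret et al., 2021); the sketch below shows a SAM-style update as one plausible instantiation. Whether the paper uses exactly this procedure is an assumption.

```python
import torch

def sharpness_aware_step(model, loss_fn, batch, optimizer, rho=0.05):
    x, y = batch
    # 1) ascend to the worst-case point within an L2 ball of radius rho
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    # 2) take the gradient at the perturbed point, then undo the perturbation
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    optimizer.step()  # descend using the sharpness-aware gradient
```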