2021-12
Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
NeurIPS 2021
(2021-12-06)
proceedings.neurips.cc (PDF) [Also on arXiv preprint arXiv:2107.00052 (2021-06-30)]
2021-10
Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize
Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks
2021-09
Gotta Go Fast with Score-Based Generative Models
2021-06
Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
2021-05
Gotta Go Fast When Generating Data with Score-Based Models
Gradient penalty from a maximum margin perspective
Adversarial score matching and improved sampling for image generation
2020-12
A Study of Condition Numbers for First-Order Optimization (arXiv:2012.05782v1 [cs.LG])
arXiv Computer Science
(2020-12-11)
2020-09
Adversarial score matching and improved sampling for image generation
2020-07
Linear Lower Bounds and Conditioning of Differentiable Games
2020-06
A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games
Accelerating Smooth Games by Manipulating Spectral Shapes
AISTATS 2020
(2020-06-03)
proceedings.mlr.press (PDF) [Also on arXiv preprint arXiv:2001.00602 (2020-01-02)]
2020-01
A Study of Condition Numbers for First-Order Optimization
AISTATS 2020
(2020-01-01)
proceedings.mlr.press (PDF) [LATEST on arXiv preprint arXiv:2012.05782 (2020-12-10)]
2019-11
Generalizing to unseen domains via distribution matching
Adversarial target-invariant representation learning for domain generalization
2019-10
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
2019-06
Lower Bounds and Conditioning of Differentiable Games
A Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Games
A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games
2019-05
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Multi-objective training of Generative Adversarial Networks with multiple discriminators
Manifold Mixup: Better Representations by Interpolating Hidden States
In Support of Over-Parametrization in Deep Reinforcement Learning: an Empirical Study
2019-04
Negative Momentum for Improved Game Dynamics
2019-03
MLSys: The New Frontier of Machine Learning Systems
2019-01
Reducing the variance in online optimization by transporting past gradients
2018-10
A Modern Take on the Bias-Variance Tradeoff in Neural Networks
2018-09
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
h-detach: Modifying the LSTM Gradient Towards Better Optimization
ICLR 2019
(2018-09-27)
ui.adsabs.harvard.edu (PDF) [LATEST on arXiv preprint arXiv:1810.03023 (2018-10-06)]
Manifold Mixup: Learning Better Representations by Interpolating Hidden States
2018-07
Negative Momentum for Improved Game Dynamics
2018-06
Manifold Mixup: Better Representations by Interpolating Hidden States
Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer
2018-03
Accelerated Stochastic Power Iteration
2018-02
YellowFin and the Art of Momentum Tuning
Learning Generative Models with Locally Disentangled Latent Factors
2018-01
Learning Representations and Generative Models for 3D Point Clouds
Publications collected and formatted using Paperoni