Search Results for author: Quanquan Gu

Found 131 papers, 19 papers with code

Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks

no code implementations ICLR 2019 Jinghui Chen, Quanquan Gu

Experiments on standard benchmarks show that Padam maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD when training deep neural networks.
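
For readers who want the flavor of the update, below is a minimal NumPy sketch of a partially adaptive step in the spirit of Padam, assuming the usual Adam/AMSGrad moment estimates and a partial exponent p in (0, 1/2]; the hyperparameter values are illustrative, not the paper's.

```python
import numpy as np

def padam_step(theta, grad, m, v, v_hat, lr=0.1, beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One partially adaptive update in the spirit of Padam (sketch, not the reference implementation).

    p = 1/2 recovers an AMSGrad-like step; p -> 0 approaches SGD with momentum.
    """
    m = beta1 * m + (1 - beta1) * grad           # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment estimate
    v_hat = np.maximum(v_hat, v)                 # AMSGrad-style maximum for stability
    theta = theta - lr * m / (v_hat ** p + eps)  # partially adaptive denominator
    return theta, m, v, v_hat
```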

Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions

no code implementations13 May 2022 Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu

We show that for both the known-$C$ and unknown-$C$ cases, our algorithm with a proper choice of hyperparameters achieves a regret that nearly matches the lower bounds.

Multi-Armed Bandits

On the Convergence of Certified Robust Training with Interval Bound Propagation

no code implementations ICLR 2022 Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh

Interval Bound Propagation (IBP) underlies current state-of-the-art methods for training neural networks with certifiable robustness guarantees against potential adversarial perturbations, yet the convergence of IBP training remains unexplored in the existing literature.

Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime

no code implementations7 Mar 2022 Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade

Stochastic gradient descent (SGD) has achieved great success due to its superior performance in both optimization and generalization.

Bandit Learning with General Function Classes: Heteroscedastic Noise and Variance-dependent Regret Bounds

no code implementations28 Feb 2022 Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu

For generalized linear bandits, we further propose an algorithm based on follow-the-regularized-leader (FTRL) subroutine and online-to-confidence-set conversion, which can achieve a tighter variance-dependent regret under certain conditions.

online learning

Benign Overfitting in Two-layer Convolutional Neural Networks

no code implementations14 Feb 2022 Yuan Cao, Zixiang Chen, Mikhail Belkin, Quanquan Gu

In this paper, we study the benign overfitting phenomenon in training a two-layer convolutional neural network (CNN).

Learning Neural Contextual Bandits Through Perturbed Rewards

no code implementations ICLR 2022 Yiling Jia, Weitong Zhang, Dongruo Zhou, Quanquan Gu, Hongning Wang

Thanks to the power of representation learning, neural contextual bandit algorithms demonstrate remarkable performance improvement against their classical counterparts.

Multi-Armed Bandits Representation Learning

Benign Overfitting in Adversarially Robust Linear Classification

no code implementations31 Dec 2021 Jinghui Chen, Yuan Cao, Quanquan Gu

Our result suggests that under moderate perturbations, adversarially trained linear classifiers can achieve the near-optimal standard and adversarial risks, despite overfitting the noisy training data.

Classification

On the Convergence and Robustness of Adversarial Training

no code implementations15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
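
As a pointer to what the criterion measures (a paraphrase, not the paper's exact statement): for a perturbed input $x_k$ constrained to a set $\mathcal{X}$ around a clean input $x_0$, FOSC takes the form $c(x_k) = \max_{x \in \mathcal{X}} \langle x - x_k, \nabla_x f(x_k) \rangle$; for an $\ell_\infty$ ball of radius $\epsilon$ this has the closed form $c(x_k) = \epsilon \|\nabla_x f(x_k)\|_1 - \langle x_k - x_0, \nabla_x f(x_k) \rangle$, and a value near zero indicates that the inner maximization has approximately converged.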

Learning Stochastic Shortest Path with Linear Function Approximation

no code implementations25 Oct 2021 Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu

To the best of our knowledge, this is the first algorithm with a sublinear regret guarantee for learning linear mixture SSP.

Faster Perturbed Stochastic Gradient Methods for Finding Local Minima

no code implementations NeurIPS 2021 Zixiang Chen, Dongruo Zhou, Quanquan Gu

In this paper, we propose LENA (Last stEp shriNkAge), a faster perturbed stochastic gradient framework for finding local minima.

Linear Contextual Bandits with Adversarial Corruptions

no code implementations NeurIPS 2021 Heyang Zhao, Dongruo Zhou, Quanquan Gu

We study the linear contextual bandit problem in the presence of adversarial corruption, where the interaction between the player and a possibly infinite decision set is contaminated by an adversary that can corrupt the rewards up to a corruption level $C$, measured by the sum over rounds of the largest alteration to the rewards in each round.

Multi-Armed Bandits
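
In symbols (as a reading of the sentence above, not a verbatim definition from the paper): if $c_t(a)$ denotes the adversary's alteration of the reward of action $a$ in round $t$, the corruption level is $C = \sum_{t=1}^{T} \max_{a} |c_t(a)|$, and the algorithm's regret is bounded in terms of this quantity.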

Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes

no code implementations19 Oct 2021 Chonghua Liao, Jiafan He, Quanquan Gu

To the best of our knowledge, this is the first provable privacy-preserving RL algorithm with linear function approximation.

reinforcement-learning

Adaptive Differentially Private Empirical Risk Minimization

no code implementations14 Oct 2021 Xiaoxia Wu, Lingxiao Wang, Irina Cristali, Quanquan Gu, Rebecca Willett

We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
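
The abstract names gradient perturbation as the mechanism; below is a minimal, generic gradient-perturbation step (clip per-example gradients, then add Gaussian noise) for context. The adaptive noise/step-size schedule that is the paper's actual contribution is not reproduced here, and all constants are illustrative.

```python
import numpy as np

def dp_gradient_step(theta, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0,
                     rng=np.random.default_rng(0)):
    """Generic differentially private gradient perturbation (sketch only).

    Each per-example gradient is clipped to `clip_norm`, the clipped gradients are
    summed, Gaussian noise calibrated to the clipping norm is added, and the result
    is averaged before taking a gradient step.
    """
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm, size=theta.shape)) / n
    return theta - lr * noisy_mean
```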

Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression

no code implementations12 Oct 2021 Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade

In this paper, we provide problem-dependent analysis on the last iterate risk bounds of SGD with decaying stepsize, for (overparameterized) linear regression problems.

Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation

no code implementations NeurIPS 2021 Weitong Zhang, Dongruo Zhou, Quanquan Gu

By constructing a special class of linear mixture MDPs, we also prove that any reward-free algorithm needs to sample at least $\tilde \Omega(H^2d\epsilon^{-2})$ episodes to obtain an $\epsilon$-optimal policy.

Model-based Reinforcement Learning reinforcement-learning

Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons

no code implementations8 Oct 2021 Yue Wu, Tao Jin, Hao Lou, Pan Xu, Farzad Farnoud, Quanquan Gu

In heterogeneous rank aggregation problems, users often exhibit various accuracy levels when comparing pairs of items.

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness

Iterative Teacher-Aware Learning

no code implementations NeurIPS 2021 Luyao Yuan, Dongruo Zhou, Junhong Shen, Jingdong Gao, Jeffrey L. Chen, Quanquan Gu, Ying Nian Wu, Song-Chun Zhu

Recently, the benefits of integrating this cooperative pedagogy into machine concept learning in discrete spaces have been demonstrated by multiple works.

Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization

no code implementations25 Aug 2021 Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu

In this paper, we provide a theoretical explanation for this phenomenon: we show that in the nonconvex setting of learning over-parameterized two-layer convolutional neural networks starting from the same random initialization, for a class of data distributions (inspired from image data), Adam and gradient descent (GD) can converge to different global solutions of the training objective with provably different generalization errors, even with weight decay regularization.

Image Classification

The Benefits of Implicit Regularization from SGD in Least Squares Problems

no code implementations NeurIPS 2021 Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade

Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice, which has been hypothesized to play an important role in the generalization of modern machine learning approaches.

Self-training Converts Weak Learners to Strong Learners in Mixture Models

no code implementations25 Jun 2021 Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu

We show that there exists a universal constant $C_{\mathrm{err}}>0$ such that if a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$, then for any $\varepsilon>0$, an iterative self-training algorithm initialized at $\boldsymbol{\beta}_0 := \boldsymbol{\beta}_{\mathrm{pl}}$ using pseudolabels $\hat y = \mathrm{sgn}(\langle \boldsymbol{\beta}_t, \mathbf{x}\rangle)$ and using at most $\tilde O(d/\varepsilon^2)$ unlabeled examples suffices to learn the Bayes-optimal classifier up to $\varepsilon$ error, where $d$ is the ambient dimension.
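
A toy NumPy sketch of the iterative self-training loop described above, assuming a linear pseudolabeler and plain gradient steps on the logistic loss (the paper's exact update rule and step sizes may differ):

```python
import numpy as np

def self_train(beta_pl, X_unlabeled, n_iters=50, lr=0.1):
    """Iterative self-training from a pseudolabeler beta_pl (illustrative sketch).

    Each round pseudolabels the unlabeled data with sgn(<beta_t, x>) and takes a
    gradient step on the logistic loss with respect to those pseudolabels.
    """
    beta = beta_pl.copy()
    for _ in range(n_iters):
        y_hat = np.sign(X_unlabeled @ beta)        # pseudolabels from the current iterate
        margins = y_hat * (X_unlabeled @ beta)
        grad = -(X_unlabeled * (y_hat / (1 + np.exp(margins)))[:, None]).mean(axis=0)
        beta -= lr * grad                          # gradient step on the logistic loss
    return beta
```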

Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent

no code implementations NeurIPS 2021 Spencer Frei, Quanquan Gu

We further show that many existing guarantees for neural networks trained by gradient descent can be unified through proxy convexity and proxy PL inequalities.

Provably Efficient Representation Learning in Low-rank Markov Decision Processes

no code implementations22 Jun 2021 Weitong Zhang, Jiafan He, Dongruo Zhou, Amy Zhang, Quanquan Gu

The success of deep reinforcement learning (DRL) is due to the power of learning a representation that is suitable for the underlying exploration and exploitation task.

reinforcement-learning Representation Learning

Variance-Aware Off-Policy Evaluation with Linear Function Approximation

no code implementations NeurIPS 2021 Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu

We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy.

reinforcement-learning

Pure Exploration in Kernel and Neural Bandits

no code implementations NeurIPS 2021 Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert Nowak

To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecification.

Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation

no code implementations NeurIPS 2021 Jiafan He, Dongruo Zhou, Quanquan Gu

The uniform-PAC guarantee is the strongest possible guarantee for reinforcement learning in the literature, which can directly imply both PAC and high probability regret bounds, making our algorithm superior to all existing algorithms with linear function approximation.

reinforcement-learning

Provable Robustness of Adversarial Training for Learning Halfspaces with Noise

no code implementations19 Apr 2021 Difan Zou, Spencer Frei, Quanquan Gu

To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.

Classification General Classification +1

Benign Overfitting of Constant-Stepsize SGD for Linear Regression

no code implementations23 Mar 2021 Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade

More specifically, for SGD with iterate averaging, we demonstrate the sharpness of the established excess risk bound by proving a matching lower bound (up to constant factors).

Batched Neural Bandits

no code implementations25 Feb 2021 Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab Mirrokni, Dongruo Zhou

In many sequential decision-making problems, the individuals are split into several batches and the decision-maker is only allowed to change her policy at the end of each batch.

Decision Making

Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs

no code implementations17 Feb 2021 Jiafan He, Dongruo Zhou, Quanquan Gu

In this paper, we study RL in episodic MDPs with adversarial reward and full information feedback, where the unknown transition probability function is a linear function of a given feature mapping, and the reward function can change arbitrarily episode by episode.

Almost Optimal Algorithms for Two-player Zero-Sum Linear Mixture Markov Games

no code implementations15 Feb 2021 Zixiang Chen, Dongruo Zhou, Quanquan Gu

To assess the optimality of our algorithm, we also prove an $\tilde{\Omega}( dH\sqrt{T})$ lower bound on the regret.

Nearly Minimax Optimal Regret for Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation

no code implementations15 Feb 2021 Yue Wu, Dongruo Zhou, Quanquan Gu

We study reinforcement learning in an infinite-horizon average-reward setting with linear function approximation, where the transition probability function of the underlying Markov Decision Process (MDP) admits a linear form over a feature mapping of the current state, action, and next state.

Provably Efficient Reinforcement Learning with Linear Function Approximation Under Adaptivity Constraints

no code implementations NeurIPS 2021 Tianhao Wang, Dongruo Zhou, Quanquan Gu

Specifically, for the batch learning model, our proposed LSVI-UCB-Batch algorithm achieves an $\tilde O(\sqrt{d^3H^3T} + dHT/B)$ regret, where $d$ is the dimension of the feature mapping, $H$ is the episode length, $T$ is the number of interactions, and $B$ is the number of batches.

reinforcement-learning

Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise

1 code implementation4 Jan 2021 Spencer Frei, Yuan Cao, Quanquan Gu

We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent (SGD) following an arbitrary initialization.

Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes

no code implementations15 Dec 2020 Dongruo Zhou, Quanquan Gu, Csaba Szepesvari

Based on the new inequality, we propose a new, computationally efficient algorithm with linear function approximation named $\text{UCRL-VTR}^{+}$ for the aforementioned linear mixture MDPs in the episodic undiscounted setting.

reinforcement-learning

Neural Contextual Bandits with Deep Representation and Shallow Exploration

no code implementations NeurIPS 2021 Pan Xu, Zheng Wen, Handong Zhao, Quanquan Gu

We study a general class of contextual bandits, where each context-action pair is associated with a raw feature vector, but the reward generating function is unknown.

Multi-Armed Bandits Representation Learning

A Finite-Time Analysis of Two Time-Scale Actor-Critic Methods

no code implementations NeurIPS 2020 Yue Wu, Weitong Zhang, Pan Xu, Quanquan Gu

In this work, we provide a non-asymptotic analysis for two time-scale actor-critic methods under the non-i.i.d. setting.

Logarithmic Regret for Reinforcement Learning with Linear Function Approximation

no code implementations23 Nov 2020 Jiafan He, Dongruo Zhou, Quanquan Gu

Reinforcement learning (RL) with linear function approximation has received increasing attention recently.

reinforcement-learning

Provable Multi-Objective Reinforcement Learning with Generative Models

no code implementations19 Nov 2020 Dongruo Zhou, Jiahao Chen, Quanquan Gu

Multi-objective reinforcement learning (MORL) is an extension of ordinary, single-objective reinforcement learning (RL) that is applicable to many real-world tasks where multiple objectives exist without known relative costs.

Q-Learning reinforcement-learning

Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate

no code implementations ICLR 2021 Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu

Understanding the algorithmic bias of \emph{stochastic gradient descent} (SGD) is one of the key challenges in modern machine learning and deep learning theory.

Learning Theory

Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling

no code implementations19 Oct 2020 Difan Zou, Pan Xu, Quanquan Gu

We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave.

Efficient Robust Training via Backward Smoothing

1 code implementation3 Oct 2020 Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu

In this work, we develop a new understanding towards Fast Adversarial Training, by viewing random initialization as performing randomized smoothing for better optimization of the inner maximization problem.
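
For context, here is a minimal sketch of the single-step adversarial training step with random initialization (FGSM with a random start) that the abstract reinterprets as randomized smoothing; this is the baseline being analyzed, not the paper's Backward Smoothing procedure, and the PyTorch-style hyperparameters below are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, x, y, optimizer, eps=8 / 255, alpha=10 / 255):
    """One training step of single-step adversarial training with random initialization (sketch)."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start inside the eps-ball
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()  # single FGSM step
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```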

Do Wider Neural Networks Really Help Adversarial Robustness?

1 code implementation NeurIPS 2021 Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu

Previous empirical results suggest that adversarial training requires wider networks for better performances.

Adversarial Robustness

Neural Thompson Sampling

3 code implementations ICLR 2021 Weitong Zhang, Dongruo Zhou, Lihong Li, Quanquan Gu

Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-armed bandit problems.

Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs

no code implementations NeurIPS 2021 Jiafan He, Dongruo Zhou, Quanquan Gu

We study the reinforcement learning problem for discounted Markov Decision Processes (MDPs) under the tabular setting.

reinforcement-learning

Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

no code implementations1 Oct 2020 Spencer Frei, Yuan Cao, Quanquan Gu

We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of linear halfspaces.

General Classification

Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping

no code implementations23 Jun 2020 Dongruo Zhou, Jiafan He, Quanquan Gu

We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde O(d\sqrt{T}/(1-\gamma)^2)$ regret, where $d$ is the dimension of the feature space, $T$ is the time horizon and $\gamma$ is the discount factor of the MDP.

reinforcement-learning

Agnostic Learning of a Single Neuron with Gradient Descent

no code implementations NeurIPS 2020 Spencer Frei, Yuan Cao, Quanquan Gu

In the agnostic PAC learning setting, where no assumption on the relationship between the labels $y$ and the input $x$ is made, if the optimal population risk is $\mathsf{OPT}$, we show that gradient descent achieves population risk $O(\mathsf{OPT})+\epsilon$ in polynomial time and sample complexity when $\sigma$ is strictly increasing.

Revisiting Membership Inference Under Realistic Assumptions

1 code implementation21 May 2020 Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans

Since previous inference attacks fail in the imbalanced-prior setting, we develop a new inference attack based on the intuition that inputs corresponding to training-set members will be near a local minimum in the loss function, and show that an attack combining this with thresholds on the per-instance loss can achieve high PPV even in settings where other attacks appear to be ineffective.

Inference Attack
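
A hedged sketch of the intuition in the abstract: flag an input as a training-set member when its per-instance loss is below a threshold and the loss rises under small random perturbations, suggesting the point sits near a local minimum. The function names, perturbation scheme, and thresholds here are illustrative, not the attack as specified in the paper.

```python
import numpy as np

def is_member(loss_fn, x, y, loss_threshold=0.1, n_probes=10, sigma=0.01,
              rng=np.random.default_rng(0)):
    """Combine a per-instance loss threshold with a local-minimum probe (illustrative sketch)."""
    base_loss = loss_fn(x, y)
    if base_loss > loss_threshold:
        return False                                  # high loss: unlikely to be a training member
    probes = [loss_fn(x + rng.normal(0.0, sigma, size=x.shape), y) for _ in range(n_probes)]
    return np.mean(probes) >= base_loss               # loss increases nearby: near a local minimum
```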

A Finite Time Analysis of Two Time-Scale Actor Critic Methods

no code implementations4 May 2020 Yue Wu, Weitong Zhang, Pan Xu, Quanquan Gu

In this work, we provide a non-asymptotic analysis for two time-scale actor-critic methods under the non-i.i.d. setting.

Improving Neural Language Generation with Spectrum Control

no code implementations ICLR 2020 Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, Quanquan Gu

Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks.

Language Modelling Machine Translation +2

Differentially Private Federated Learning with Laplacian Smoothing

no code implementations1 May 2020 Zhicong Liang, Bao Wang, Quanquan Gu, Stanley Osher, Yuan YAO

Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.

Federated Learning

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

no code implementations ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

MOTS: Minimax Optimal Thompson Sampling

no code implementations3 Mar 2020 Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, Quanquan Gu

Thompson sampling is one of the most widely used algorithms for many online decision problems, due to its simplicity in implementation and superior empirical performance over other state-of-the-art methods.
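
For readers unfamiliar with Thompson sampling itself, here is the textbook Bernoulli-bandit version with Beta posteriors; MOTS modifies this basic scheme in ways not shown here.

```python
import numpy as np

def bernoulli_thompson_sampling(pull, n_arms, horizon, rng=np.random.default_rng(0)):
    """Textbook Thompson sampling with Beta(1, 1) priors (not the MOTS algorithm itself)."""
    successes = np.ones(n_arms)   # Beta posterior alpha parameters
    failures = np.ones(n_arms)    # Beta posterior beta parameters
    for _ in range(horizon):
        samples = rng.beta(successes, failures)   # one posterior sample per arm
        arm = int(np.argmax(samples))
        reward = pull(arm)                        # observed Bernoulli reward in {0, 1}
        successes[arm] += reward
        failures[arm] += 1 - reward
    return successes / (successes + failures)     # posterior mean estimates
```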

On the Global Convergence of Training Deep Linear ResNets

no code implementations ICLR 2020 Difan Zou, Philip M. Long, Quanquan Gu

We further propose modified identity input and output transformations, and show that a $(d+k)$-wide neural network is sufficient to guarantee the global convergence of GD/SGD, where $d, k$ are the input and output dimensions, respectively.

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

1 code implementation1 Mar 2020 Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans

Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space.

Adversarial Robustness

Double Explore-then-Commit: Asymptotic Optimality and Beyond

no code implementations21 Feb 2020 Tianyuan Jin, Pan Xu, Xiaokui Xiao, Quanquan Gu

In this paper, we show that a variant of the ETC algorithm can achieve asymptotic optimality for multi-armed bandit problems, just as UCB-type algorithms do, and we extend it to the batched bandit setting.
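
The explore-then-commit (ETC) template the paper builds on is simple enough to state in a few lines; the sketch below is the vanilla version, and the paper's "double" variant modifies the exploration phase in ways not shown here.

```python
import numpy as np

def explore_then_commit(pull, n_arms, m, horizon):
    """Vanilla explore-then-commit (sketch): pull each arm m times, then commit to the empirical best."""
    rewards = np.zeros(n_arms)
    for arm in range(n_arms):
        for _ in range(m):
            rewards[arm] += pull(arm)
    best = int(np.argmax(rewards / m))           # empirical best arm after exploration
    total = rewards.sum()
    for _ in range(horizon - m * n_arms):
        total += pull(best)                      # commit to the chosen arm for the remaining rounds
    return total
```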

A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks

no code implementations NeurIPS 2020 Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang

In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a "kernel-like" behavior.

Learning Theory

A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation

no code implementations10 Dec 2019 Pan Xu, Quanquan Gu

Q-learning with neural network function approximation (neural Q-learning for short) is among the most prevalent deep reinforcement learning algorithms.

Q-Learning reinforcement-learning

Towards Understanding the Spectral Bias of Deep Learning

no code implementations3 Dec 2019 Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu

An intriguing phenomenon observed during training neural networks is the spectral bias, which states that neural networks are biased towards learning less complex functions.

Rank Aggregation via Heterogeneous Thurstone Preference Models

no code implementations3 Dec 2019 Tao Jin, Pan Xu, Quanquan Gu, Farzad Farnoud

By allowing different noise distributions, the proposed HTM model maintains the generality of Thurstone's original framework, and as such, also extends the Bradley-Terry-Luce (BTL) model for pairwise comparisons to heterogeneous populations of users.
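
As a point of reference: the classical Thurstone framework posits item scores $s_1, \dots, s_n$ and predicts $\mathbb{P}(i \text{ beats } j) = F(s_i - s_j)$ for a noise CDF $F$; choosing the logistic CDF gives the BTL model $\mathbb{P}(i \text{ beats } j) = e^{s_i}/(e^{s_i} + e^{s_j})$. The heterogeneous model described above lets the noise distribution vary across users, which is what "heterogeneous populations of users" refers to.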

Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction

1 code implementation NeurIPS 2019 Difan Zou, Pan Xu, Quanquan Gu

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) algorithms have received increasing attention in both theory and practice.

How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?

no code implementations ICLR 2021 Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu

A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size $n$ and the inverse of the target error $\epsilon^{-1}$, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees.

Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks

1 code implementation NeurIPS 2019 Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, Quanquan Gu

Original full-batch GCN training requires calculating the representations of all the nodes in the graph at every GCN layer, which incurs high computation and memory costs.

Node Classification

Tight Sample Complexity of Learning One-hidden-layer Convolutional Neural Networks

no code implementations NeurIPS 2019 Yuan Cao, Quanquan Gu

We study the sample complexity of learning one-hidden-layer convolutional neural networks (CNNs) with non-overlapping filters.

Neural Contextual Bandits with UCB-based Exploration

2 code implementations ICML 2020 Dongruo Zhou, Lihong Li, Quanquan Gu

To the best of our knowledge, it is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee.

Efficient Exploration Multi-Armed Bandits

Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo

1 code implementation2 Nov 2019 Bao Wang, Difan Zou, Quanquan Gu, Stanley Osher

As an important Markov chain Monte Carlo (MCMC) method, the stochastic gradient Langevin dynamics (SGLD) algorithm has achieved great success in Bayesian learning and posterior sampling.
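
For context, the SGLD update referenced in the abstract is a stochastic gradient step plus injected Gaussian noise; the sketch below is the standard update, not the Laplacian-smoothed variant the paper proposes.

```python
import numpy as np

def sgld_step(theta, stoch_grad, step_size=1e-3, rng=np.random.default_rng(0)):
    """One stochastic gradient Langevin dynamics step (standard form, sketch).

    theta_{t+1} = theta_t - eta * grad_U_hat(theta_t) + sqrt(2 * eta) * N(0, I)
    """
    noise = rng.normal(0.0, 1.0, size=theta.shape)
    return theta - step_size * stoch_grad(theta) + np.sqrt(2.0 * step_size) * noise
```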

Efficient Privacy-Preserving Stochastic Nonconvex Optimization

no code implementations30 Oct 2019 Lingxiao Wang, Bargav Jayaraman, David Evans, Quanquan Gu

While many solutions for privacy-preserving convex empirical risk minimization (ERM) have been developed, privacy-preserving nonconvex ERM remains a challenge.

Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks

no code implementations NeurIPS 2019 Spencer Frei, Yuan Cao, Quanquan Gu

The skip connections used in residual networks have become a standard architectural choice in deep learning, owing to the improved training stability and generalization performance of this architecture, although theoretical understanding of this improvement has been limited.

Generalization Bounds

Training Deep Neural Networks with Partially Adaptive Momentum

no code implementations25 Sep 2019 Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, Quanquan Gu

Experiments on standard benchmarks show that our proposed algorithm maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD in training deep neural networks.

On the Dynamics and Convergence of Weight Normalization for Training Neural Networks

no code implementations25 Sep 2019 Yonatan Dukler, Quanquan Gu, Guido Montufar

We present a proof of convergence for ReLU networks trained with weight normalization.

NeuralUCB: Contextual Bandits with Neural Network-Based Exploration

no code implementations25 Sep 2019 Dongruo Zhou, Lihong Li, Quanquan Gu

To the best of our knowledge, our algorithm is the first neural network-based contextual bandit algorithm with near-optimal regret guarantee.

Efficient Exploration Multi-Armed Bandits

A Knowledge Transfer Framework for Differentially Private Sparse Learning

no code implementations13 Sep 2019 Lingxiao Wang, Quanquan Gu

We study the problem of estimating high dimensional models with underlying sparse structures while preserving the privacy of each training example.

Sparse Learning Transfer Learning

DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

1 code implementation28 Jun 2019 Bao Wang, Quanquan Gu, March Boedihardjo, Farzin Barekat, Stanley J. Osher

At the core of DP-LSSGD is Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism.

Stochastic Optimization

An Improved Analysis of Training Over-parameterized Deep Neural Networks

no code implementations NeurIPS 2019 Difan Zou, Quanquan Gu

A recent line of research has shown that gradient-based algorithms with random initialization can converge to the global minima of the training loss for over-parameterized (i.e., sufficiently wide) deep neural networks.

Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks

no code implementations NeurIPS 2019 Yuan Cao, Quanquan Gu

We study the training and generalization of deep neural networks (DNNs) in the over-parameterized regime, where the network width (i.e., number of hidden nodes per layer) is much larger than the number of training data points.

Generalization Bounds

An Improved Convergence Analysis of Stochastic Variance-Reduced Policy Gradient

no code implementations29 May 2019 Pan Xu, Felicia Gao, Quanquan Gu

We revisit the stochastic variance-reduced policy gradient (SVRPG) method proposed by Papini et al. (2018) for reinforcement learning.

reinforcement-learning

Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks

no code implementations4 Feb 2019 Yuan Cao, Quanquan Gu

However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs.

Generalization Bounds

Lower Bounds for Smooth Nonconvex Finite-Sum Optimization

no code implementations31 Jan 2019 Dongruo Zhou, Quanquan Gu

We prove tight lower bounds for the complexity of finding $\epsilon$-suboptimal point and $\epsilon$-approximate stationary point in different settings, for a wide regime of the smallest eigenvalue of the Hessian of the objective function (or each component function).

Stochastic Recursive Variance-Reduced Cubic Regularization Methods

no code implementations31 Jan 2019 Dongruo Zhou, Quanquan Gu

Built upon SRVRC, we further propose a Hessian-free SRVRC algorithm, namely SRVRC$_{\text{free}}$, which only requires stochastic gradient and Hessian-vector product computations, and achieves $\tilde O(dn\epsilon^{-2} \land d\epsilon^{-3})$ runtime complexity, where $n$ is the number of component functions in the finite-sum structure, $d$ is the problem dimension, and $\epsilon$ is the optimization precision.

Stochastic Nested Variance Reduced Gradient Descent for Nonconvex Optimization

no code implementations NeurIPS 2018 Dongruo Zhou, Pan Xu, Quanquan Gu

We study finite-sum nonconvex optimization problems, where the objective function is an average of $n$ nonconvex functions.

Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization

1 code implementation NeurIPS 2018 Bargav Jayaraman, Lingxiao Wang, David Evans, Quanquan Gu

We explore two popular methods of differential privacy, output perturbation and gradient perturbation, and advance the state-of-the-art for both methods in the distributed learning setting.

Third-order Smoothness Helps: Faster Stochastic Optimization Algorithms for Finding Local Minima

no code implementations NeurIPS 2018 Yaodong Yu, Pan Xu, Quanquan Gu

We propose stochastic optimization algorithms that can find local minima faster than existing algorithms for nonconvex optimization problems, by exploiting the third-order smoothness to escape non-degenerate saddle points more efficiently.

Stochastic Optimization

Sample Efficient Stochastic Variance-Reduced Cubic Regularization Method

no code implementations29 Nov 2018 Dongruo Zhou, Pan Xu, Quanquan Gu

The proposed algorithm achieves a lower sample complexity of Hessian matrix computation than existing cubic regularization based methods.

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

2 code implementations ICLR 2019 Jinghui Chen, Dongruo Zhou, Jin-Feng Yi, Quanquan Gu

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks.

Adversarial Attack
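
As background for the title, a generic Frank-Wolfe ascent step for an $\ell_\infty$-constrained attack looks like the sketch below: a linear maximization oracle over the constraint set followed by a convex-combination update. Step sizes and the momentum/zeroth-order machinery of the paper's actual white-box and black-box variants are omitted.

```python
import numpy as np

def frank_wolfe_linf_attack(grad_fn, x0, eps=8 / 255, n_steps=20):
    """Generic Frank-Wolfe loss maximization over the eps-ball around x0 (illustrative sketch)."""
    x = x0.copy()
    for t in range(n_steps):
        g = grad_fn(x)                        # gradient of the loss with respect to the input
        v = x0 + eps * np.sign(g)             # linear maximization oracle for the l_inf ball
        gamma = 2.0 / (t + 2.0)               # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v       # convex combination keeps x in the feasible set
    return np.clip(x, 0.0, 1.0)
```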

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks

no code implementations21 Nov 2018 Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu

In particular, we study the binary classification problem and show that for a broad family of loss functions, with proper random weight initialization, both gradient descent and stochastic gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under a mild assumption on the training data.

On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization

no code implementations16 Aug 2018 Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, Quanquan Gu

In this paper, we provide a fine-grained convergence analysis for a general class of adaptive gradient methods including AMSGrad, RMSProp and AdaGrad.

Continuous and Discrete-time Accelerated Stochastic Mirror Descent for Strongly Convex Functions

no code implementations ICML 2018 Pan Xu, Tianhao Wang, Quanquan Gu

We provide a second-order stochastic differential equation (SDE), which characterizes the continuous-time dynamics of accelerated stochastic mirror descent (ASMD) for strongly convex functions.

Stochastic Optimization

Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization

no code implementations ICML 2018 Jinghui Chen, Pan Xu, Lingxiao Wang, Jian Ma, Quanquan Gu

We propose a nonconvex estimator for the covariate adjusted precision matrix estimation problem in the high dimensional regime, under sparsity constraints.

A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery

no code implementations ICML 2018 Xiao Zhang, Lingxiao Wang, Yaodong Yu, Quanquan Gu

We propose a primal-dual based framework for analyzing the global optimality of nonconvex low-rank matrix recovery.

Matrix Completion

Finding Local Minima via Stochastic Nested Variance Reduction

no code implementations22 Jun 2018 Dongruo Zhou, Pan Xu, Quanquan Gu

For general stochastic optimization problems, the proposed $\text{SNVRG}^{+}+\text{Neon2}^{\text{online}}$ achieves $\tilde{O}(\epsilon^{-3}+\epsilon_H^{-5}+\epsilon^{-2}\epsilon_H^{-3})$ gradient complexity, which is better than both $\text{SVRG}+\text{Neon2}^{\text{online}}$ (Allen-Zhu and Li, 2017) and Natasha2 (Allen-Zhu, 2017) in certain regimes.

Stochastic Optimization

Learning One-hidden-layer ReLU Networks via Gradient Descent

no code implementations20 Jun 2018 Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu

We study the problem of learning one-hidden-layer neural networks with Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from standard Gaussian distribution and the outputs are generated from a noisy teacher network.

Stochastic Nested Variance Reduction for Nonconvex Optimization

no code implementations NeurIPS 2018 Dongruo Zhou, Pan Xu, Quanquan Gu

We study finite-sum nonconvex optimization problems, where the objective function is an average of $n$ nonconvex functions.

Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks

5 code implementations18 Jun 2018 Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, Quanquan Gu

Experiments on standard benchmarks show that our proposed algorithm maintains a convergence rate as fast as Adam/AMSGrad while generalizing as well as SGD in training deep neural networks.

Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow

1 code implementation ICML 2018 Xiao Zhang, Simon S. Du, Quanquan Gu

We revisit the inductive matrix completion problem that aims to recover a rank-$r$ matrix with ambient dimension $d$ given $n$ features as the side prior information.

Matrix Completion

Stochastic Variance-Reduced Cubic Regularized Newton Method

no code implementations ICML 2018 Dongruo Zhou, Pan Xu, Quanquan Gu

At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for cubic regularization method.

Stochastic Variance-Reduced Hamilton Monte Carlo Methods

no code implementations ICML 2018 Difan Zou, Pan Xu, Quanquan Gu

We propose a fast stochastic Hamilton Monte Carlo (HMC) method, for sampling from a smooth and strongly log-concave distribution.

Stochastic Optimization

Third-order Smoothness Helps: Even Faster Stochastic Optimization Algorithms for Finding Local Minima

no code implementations18 Dec 2017 Yaodong Yu, Pan Xu, Quanquan Gu

We propose stochastic optimization algorithms that can find local minima faster than existing algorithms for nonconvex optimization problems, by exploiting the third-order smoothness to escape non-degenerate saddle points more efficiently.

Stochastic Optimization

Saving Gradient and Negative Curvature Computations: Finding Local Minima More Efficiently

no code implementations11 Dec 2017 Yaodong Yu, Difan Zou, Quanquan Gu

We propose a family of nonconvex optimization algorithms that are able to save gradient and negative curvature computations to a large extent, and are guaranteed to find an approximate local minimum with improved runtime complexity.

Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimization

no code implementations NeurIPS 2017 Pan Xu, Jian Ma, Quanquan Gu

In order to speed up the estimation of the sparse plus low-rank components, we propose a sparsity constrained maximum likelihood estimator based on matrix factorization and an efficient alternating gradient descent algorithm with hard thresholding to solve it.
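
To illustrate the algorithmic template named in the abstract (alternating gradient descent with hard thresholding over a sparse-plus-low-rank factorization), here is a heavily simplified sketch for a generic loss; the actual estimator, loss, and step sizes in the paper differ.

```python
import numpy as np

def hard_threshold(S, k):
    """Keep the k largest-magnitude entries of S and zero out the rest."""
    flat = np.abs(S).ravel()
    if k < flat.size:
        cutoff = np.partition(flat, -k)[-k]
        S = np.where(np.abs(S) >= cutoff, S, 0.0)
    return S

def alternating_gd(grad_S, grad_U, grad_V, S, U, V, sparsity, n_iters=100, lr=1e-2):
    """Alternating gradient descent with hard thresholding on the sparse part (generic sketch)."""
    for _ in range(n_iters):
        S = hard_threshold(S - lr * grad_S(S, U, V), sparsity)  # sparse component update
        U = U - lr * grad_U(S, U, V)                            # low-rank factor updates
        V = V - lr * grad_V(S, U, V)
    return S, U, V
```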

High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm

no code implementations ICML 2017 Rongda Zhu, Lingxiao Wang, ChengXiang Zhai, Quanquan Gu

We apply our generic algorithm to two illustrative latent variable models: Gaussian mixture model and mixture of linear regression, and demonstrate the advantages of our algorithm by both theoretical analysis and numerical experiments.

A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery

no code implementations ICML 2017 Lingxiao Wang, Xiao Zhang, Quanquan Gu

We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery.

Robust Gaussian Graphical Model Estimation with Arbitrary Corruption

no code implementations ICML 2017 Lingxiao Wang, Quanquan Gu

In particular, we show that provided that the number of corrupted samples $n_2$ for each variable satisfies $n_2 \lesssim \sqrt{n}/\sqrt{\log d}$, where $n$ is the sample size and $d$ is the number of variables, the proposed robust precision matrix estimator attains the same statistical rate as the standard estimator for Gaussian graphical models.

Model Selection Two-sample testing

Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

no code implementations NeurIPS 2018 Pan Xu, Jinghui Chen, Difan Zou, Quanquan Gu

Furthermore, for the first time we prove the global convergence guarantee for variance reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within $\tilde O\big(\sqrt{n}d^5/(\lambda^4\epsilon^{5/2})\big)$ stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime.

Robust Wirtinger Flow for Phase Retrieval with Arbitrary Corruption

no code implementations20 Apr 2017 Jinghui Chen, Lingxiao Wang, Xiao Zhang, Quanquan Gu

We consider the robust phase retrieval problem of recovering the unknown signal from the magnitude-only measurements, where the measurements can be contaminated by both sparse arbitrary corruption and bounded random noise.

Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimizations

no code implementations NeurIPS 2017 Pan Xu, Jian Ma, Quanquan Gu

In order to speed up the estimation of the sparse plus low-rank components, we propose a sparsity constrained maximum likelihood estimator based on matrix factorization, and an efficient alternating gradient descent algorithm with hard thresholding to solve it.

A Unified Framework for Low-Rank plus Sparse Matrix Recovery

no code implementations21 Feb 2017 Xiao Zhang, Lingxiao Wang, Quanquan Gu

We propose a unified framework to solve general low-rank plus sparse matrix recovery problems based on matrix factorization, which covers a broad family of objective functions satisfying the restricted strong convexity and smoothness conditions.

A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery

no code implementations9 Jan 2017 Lingxiao Wang, Xiao Zhang, Quanquan Gu

We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery.

Stochastic Variance-reduced Gradient Descent for Low-rank Matrix Recovery from Linear Measurements

no code implementations2 Jan 2017 Xiao Zhang, Lingxiao Wang, Quanquan Gu

In the noiseless setting, our algorithm is guaranteed to converge linearly to the unknown low-rank matrix and achieves exact recovery with optimal sample complexity.

Communication-efficient Distributed Estimation and Inference for Transelliptical Graphical Models

no code implementations29 Dec 2016 Pan Xu, Lu Tian, Quanquan Gu

In detail, the proposed method distributes the $d$-dimensional data of size $N$ generated from a transelliptical graphical model into $m$ worker machines, and estimates the latent precision matrix on each worker machine based on the data of size $n=N/m$.

Semiparametric Differential Graph Models

no code implementations NeurIPS 2016 Pan Xu, Quanquan Gu

In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network.

A Unified Computational and Statistical Framework for Nonconvex Low-Rank Matrix Estimation

no code implementations17 Oct 2016 Lingxiao Wang, Xiao Zhang, Quanquan Gu

In the general case with noisy observations, we show that our algorithm is guaranteed to linearly converge to the unknown low-rank matrix up to minimax optimal statistical error, provided an appropriate initial estimator.

Matrix Completion

Communication-efficient Distributed Sparse Linear Discriminant Analysis

no code implementations15 Oct 2016 Lu Tian, Quanquan Gu

We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime.

Model Selection

High Dimensional Multivariate Regression and Precision Matrix Estimation via Nonconvex Optimization

no code implementations2 Jun 2016 Jinghui Chen, Quanquan Gu

We propose a nonconvex estimator for joint multivariate regression and precision matrix estimation in the high dimensional regime, under sparsity constraints.

Sharp Computational-Statistical Phase Transitions via Oracle Computational Model

no code implementations30 Dec 2015 Zhaoran Wang, Quanquan Gu, Han Liu

Based upon an oracle model of computation, which captures the interactions between algorithms and data, we establish a general lower bound that explicitly connects the minimum testing risk under computational budget constraints with the intrinsic probabilistic and combinatorial structures of statistical problems.

Two-sample testing

High Dimensional EM Algorithm: Statistical Optimization and Asymptotic Normality

no code implementations NeurIPS 2015 Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu

We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.

Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation

no code implementations18 May 2015 Huan Gui, Quanquan Gu

Moreover, we rigorously show that under a certain condition on the magnitude of the nonzero singular values, the proposed estimator enjoys the oracle property (i.e., it exactly recovers the true rank of the matrix), besides attaining a faster rate.

Matrix Completion

Statistical Limits of Convex Relaxations

no code implementations4 Mar 2015 Zhaoran Wang, Quanquan Gu, Han Liu

Many high dimensional sparse learning problems are formulated as nonconvex optimization.

Sparse Learning Stochastic Block Model

Local and Global Inference for High Dimensional Nonparanormal Graphical Models

no code implementations9 Feb 2015 Quanquan Gu, Yuan Cao, Yang Ning, Han Liu

Due to the presence of unknown marginal transformations, we propose a pseudo likelihood based inferential approach.

High Dimensional Expectation-Maximization Algorithm: Statistical Optimization and Asymptotic Normality

no code implementations30 Dec 2014 Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu

We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.

Sparse PCA with Oracle Property

no code implementations NeurIPS 2014 Quanquan Gu, Zhaoran Wang, Han Liu

In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-$k$, and attains a $\sqrt{s/n}$ statistical rate of convergence with $s$ being the subspace sparsity level and $n$ the sample size.

Robust Tensor Decomposition with Gross Corruption

no code implementations NeurIPS 2014 Quanquan Gu, Huan Gui, Jiawei Han

In this paper, we study the statistical performance of robust tensor decomposition with gross corruption.

Tensor Decomposition

Selective Labeling via Error Bound Minimization

no code implementations NeurIPS 2012 Quanquan Gu, Tong Zhang, Jiawei Han, Chris H. Ding

In particular, we derive a deterministic generalization error bound for LapRLS trained on subsampled data, and propose to select a subset of data points to label by minimizing this upper bound.

Generalized Fisher Score for Feature Selection

1 code implementation14 Feb 2012 Quanquan Gu, Zhenhui Li, Jiawei Han

Fisher score is one of the most widely used supervised feature selection methods.
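
Since this entry has an accompanying implementation, the snippet below is only a compact reference version of the classical per-feature Fisher score, not the generalized formulation the paper introduces: features are ranked by the between-class variance of the class means divided by the within-class variance.

```python
import numpy as np

def fisher_score(X, y):
    """Classical Fisher score for each feature (higher is more discriminative); sketch only."""
    classes, counts = np.unique(y, return_counts=True)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c, n_c in zip(classes, counts):
        Xc = X[y == c]
        between += n_c * (Xc.mean(axis=0) - overall_mean) ** 2  # between-class scatter
        within += n_c * Xc.var(axis=0)                          # within-class scatter
    return between / (within + 1e-12)
```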
