Search Results for author: Chris Junchi Li

Found 26 papers, 0 papers with code

Policy Optimization via Stochastic Recursive Gradient Algorithm

no code implementations ICLR 2019 Huizhuo Yuan, Chris Junchi Li, Yuhao Tang, Yuren Zhou

In this paper, we propose the StochAstic Recursive grAdient Policy Optimization (SARAPO) algorithm, a novel variance-reduction method built on Trust Region Policy Optimization (TRPO).
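The recursive-gradient (SARAH-style) estimator underlying this variance-reduction idea can be sketched on a toy quadratic; the objective, additive noise model, and step size below are illustrative assumptions, not the paper's policy-optimization setting:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0])                  # toy quadratic objective f(x) = 0.5 * x'Ax
x = np.array([5.0, -3.0])
eta = 0.1

def stoch_grad(x, noise):
    return A @ x + noise                 # noisy gradient oracle

v = stoch_grad(x, 0.01 * rng.standard_normal(2))   # anchor estimate
x_prev = x.copy()
x = x - eta * v
for _ in range(200):
    noise = 0.01 * rng.standard_normal(2)
    # recursive estimator: the SAME sample is evaluated at x_t and x_{t-1},
    # so the noise largely cancels in the difference as the iterates converge
    v = stoch_grad(x, noise) - stoch_grad(x_prev, noise) + v
    x_prev = x.copy()
    x = x - eta * v

print(np.linalg.norm(x))                 # close to the minimizer at the origin
```

The same-sample difference is what distinguishes this recursion from plain SGD; in SARAPO the noisy oracle would be a sampled policy gradient rather than this toy quadratic.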

Fast Decentralized Gradient Tracking for Federated Minimax Optimization with Local Updates

no code implementations 7 May 2024 Chris Junchi Li

Federated learning (FL) for minimax optimization has emerged as a powerful paradigm for training models across distributed nodes/clients while preserving data privacy and maintaining model robustness under data heterogeneity.

Federated Learning

Accelerated Fully First-Order Methods for Bilevel and Minimax Optimization

no code implementations 1 May 2024 Chris Junchi Li

This paper presents a new family of accelerated fully first-order methods for bilevel optimization, namely the \emph{(Perturbed) Restarted Accelerated Fully First-order methods for Bilevel Approximation}, abbreviated as \texttt{(P)RAF${}^2$BA}.

Bilevel Optimization, Computational Efficiency

A General Continuous-Time Formulation of Stochastic ADMM and Its Variants

no code implementations 22 Apr 2024 Chris Junchi Li

Stochastic versions of the alternating direction method of multipliers (ADMM) and its variants play a key role in many modern large-scale machine learning problems.

Accelerating Inexact HyperGradient Descent for Bilevel Optimization

no code implementations 30 Jun 2023 Haikuo Yang, Luo Luo, Chris Junchi Li, Michael I. Jordan

We present a method for solving general nonconvex-strongly-convex bilevel optimization problems.

Bilevel Optimization
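The hypergradient computation at the core of such bilevel methods can be sketched on a toy problem whose inner solution is known in closed form; the functions, constants, and inexact inner solve below are illustrative assumptions, not the paper's algorithm:

```python
# Toy bilevel problem (illustrative only):
#   outer: f(x, y) = 0.5*(x - 1)^2 + 0.5*y^2
#   inner: g(x, y) = 0.5*(y - c*x)^2  =>  y*(x) = c*x
c = 0.5

def hypergrad(x, y):
    # total derivative df/dx = f_x + (dy*/dx) * f_y, with dy*/dx = c here
    return (x - 1.0) + c * y

x, y = 0.0, 0.0
eta_out, eta_in = 0.2, 0.5
for _ in range(100):
    for _ in range(10):                  # inexact inner solve by gradient steps
        y -= eta_in * (y - c * x)
    x -= eta_out * hypergrad(x, y)

print(x)                                 # analytic solution: x* = 1/(1 + c^2) = 0.8
```

In general nonconvex-strongly-convex problems the inner solve and the Hessian-inverse-vector product are both approximate; here the inner Hessian is the identity, which keeps the sketch short.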

Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization

no code implementations 31 Oct 2022 Chris Junchi Li, Angela Yuan, Gauthier Gidel, Quanquan Gu, Michael I. Jordan

AG-OG is the first single-call algorithm with optimal convergence rates in both deterministic and stochastic settings for bilinearly coupled minimax optimization problems.

A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning

no code implementations 30 Sep 2022 Zixiang Chen, Chris Junchi Li, Angela Yuan, Quanquan Gu, Michael I. Jordan

With the increasing need for handling large state and action spaces, general function approximation has become a key technique in reinforcement learning (RL).

Reinforcement Learning (RL)

Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium

no code implementations 10 Aug 2022 Chris Junchi Li, Dongruo Zhou, Quanquan Gu, Michael I. Jordan

We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation, where the action-value function is approximated by a function in a Reproducing Kernel Hilbert Space (RKHS).

Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization

no code implementations 17 Jun 2022 Simon S. Du, Gauthier Gidel, Michael I. Jordan, Chris Junchi Li

We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}~F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to stochastic first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
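A minimal deterministic extragradient sketch on a toy instance of this bilinearly-coupled problem; the quadratic choices of $F$ and $G$, the coupling matrix, and the step size are illustrative assumptions, not the paper's extragradient-based algorithms or rates:

```python
import numpy as np

# Toy instance of min_x max_y F(x) + H(x, y) - G(y) with quadratic F, G
# and bilinear H(x, y) = x'By; the saddle point is at the origin
mu = 0.1
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
grad_x = lambda x, y: mu * x + B @ y     # gradient of F(x) + x'By in x
grad_y = lambda x, y: B.T @ x - mu * y   # gradient of x'By - G(y) in y

x, y = np.ones(2), np.ones(2)
eta = 0.2
for _ in range(300):
    xh = x - eta * grad_x(x, y)          # extrapolation (look-ahead) step
    yh = y + eta * grad_y(x, y)
    x = x - eta * grad_x(xh, yh)         # update using the look-ahead gradients
    y = y + eta * grad_y(xh, yh)

print(np.linalg.norm(x), np.linalg.norm(y))   # both shrink toward the saddle
```

The look-ahead step is what lets extragradient handle the rotational dynamics induced by the bilinear coupling, where plain simultaneous gradient descent-ascent can diverge.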

Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems

no code implementations 29 Dec 2021 Chris Junchi Li, Michael I. Jordan

Motivated by the problem of online canonical correlation analysis, we propose the \emph{Stochastic Scaled-Gradient Descent} (SSGD) algorithm for minimizing the expectation of a stochastic function over a generic Riemannian manifold.

On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging

no code implementations 30 Jun 2021 Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, Michael I. Jordan

We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence.
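A sketch of same-sample SEG with iterate averaging on a toy bilinear game; the noise model, step size, and averaging window are my own assumptions, not the paper's restarted scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
B = np.array([[0.0, 1.0], [-1.0, 0.0]])   # bilinear game min_x max_y x'By, saddle at 0
x, y = np.ones(2), np.ones(2)
eta = 0.1
xs, ys = [], []
for _ in range(5000):
    nx = 0.1 * rng.standard_normal(2)
    ny = 0.1 * rng.standard_normal(2)
    # "same-sample": extrapolation and update share one noise realization
    xh = x - eta * (B @ y + nx)
    yh = y + eta * (B.T @ x + ny)
    x = x - eta * (B @ yh + nx)
    y = y + eta * (B.T @ xh + ny)
    xs.append(x.copy())
    ys.append(y.copy())

x_avg = np.mean(xs[len(xs) // 2:], axis=0)   # average the later iterates
y_avg = np.mean(ys[len(ys) // 2:], axis=0)
print(np.linalg.norm(x_avg), np.linalg.norm(y_avg))
```

With a constant step size the individual iterates only reach a noise-dominated neighborhood of the saddle; averaging the later iterates drives the estimate much closer, which is the phenomenon restarted iteration averaging exploits.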

Stochastic Approximation for Online Tensorial Independent Component Analysis

no code implementations 28 Dec 2020 Chris Junchi Li, Michael I. Jordan

For estimating one component, we provide a dynamics-based analysis to prove that our online tensorial ICA algorithm with a specific choice of stepsize achieves a sharp finite-sample error bound.

Dimensionality Reduction

ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm

no code implementations 28 Aug 2020 Chris Junchi Li, Wenlong Mou, Martin J. Wainwright, Michael I. Jordan

We study the problem of solving strongly convex and smooth unconstrained optimization problems using stochastic first-order algorithms.

Stochastic Optimization, Unity
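A recursive one-over-t gradient estimator in the spirit of ROOT-SGD can be sketched on a toy strongly convex quadratic; the exact recursion, noise model, and step size below are my reading of the idea and are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0])
b = np.array([1.0, 1.0])              # f(x) = 0.5 x'Ax - b'x, minimizer A^{-1}b
x = np.array([3.0, -2.0])
eta = 0.2

noise = 0.05 * rng.standard_normal(2)
v = A @ x - b + noise                 # v_1: plain stochastic gradient
x_prev = x.copy()
x = x - eta * v
for t in range(2, 2001):
    noise = 0.05 * rng.standard_normal(2)
    g_cur  = A @ x      - b + noise   # same sample xi_t evaluated at x_t ...
    g_prev = A @ x_prev - b + noise   # ... and at x_{t-1}
    # recursive one-over-t averaging of the gradient estimate
    v = g_cur + (1.0 - 1.0 / t) * (v - g_prev)
    x_prev = x.copy()
    x = x - eta * v

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_star))
```

The 1/t weighting makes the estimator's variance decay over time while the same-sample correction keeps it tracking the current iterate, a single-loop combination of variance reduction and averaging.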

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration

no code implementations 9 Apr 2020 Wenlong Mou, Chris Junchi Li, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan

When the matrix $\bar{A}$ is Hurwitz, we prove a central limit theorem (CLT) for the averaged iterates with fixed step size as the number of iterations goes to infinity.
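The averaged linear stochastic approximation scheme the excerpt refers to can be sketched as follows; the sign conventions, noise model, and burn-in choice are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A_bar = np.array([[2.0, 0.0], [0.0, 1.0]])   # -A_bar Hurwitz: eigenvalues in the left half-plane
b_bar = np.array([2.0, 1.0])                 # target: theta* = A_bar^{-1} b_bar = [1, 1]

theta = np.zeros(2)
eta = 0.05                                   # fixed step size
iters = []
for _ in range(20000):
    A_t = A_bar + 0.1 * rng.standard_normal((2, 2))   # noisy observations of (A_bar, b_bar)
    b_t = b_bar + 0.1 * rng.standard_normal(2)
    theta = theta - eta * (A_t @ theta - b_t)         # linear stochastic approximation step
    iters.append(theta.copy())

theta_pr = np.mean(iters[len(iters) // 2:], axis=0)   # Polyak-Ruppert average after burn-in
print(theta_pr)                                       # close to [1, 1]
```

Individual iterates hover in a noise ball around $\theta^*$; the Polyak-Ruppert average concentrates much more tightly, which is the effect the paper's CLT makes precise.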

Stochastic Modified Equations for Continuous Limit of Stochastic ADMM

no code implementations 7 Mar 2020 Xiang Zhou, Huizhuo Yuan, Chris Junchi Li, Qingyun Sun

In this work, we put different variants of stochastic ADMM into a unified form, which includes standard, linearized and gradient-based ADMM with relaxation, and study their dynamics via a continuous-time model approach.
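One of the variants covered by such a unified form, a gradient-based (linearized) stochastic ADMM step, can be sketched on a toy l1-regularized problem; the problem instance, step sizes, and penalty parameter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, rho, eta = 0.1, 1.0, 0.1
a = np.array([1.0, -0.05])        # f(x) = E[0.5*||x - (a + noise)||^2], g(z) = lam*||z||_1
x = np.zeros(2)
z = np.zeros(2)
u = np.zeros(2)                   # scaled dual variable

def soft(v, t):                   # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(3000):
    grad = x - (a + 0.1 * rng.standard_normal(2))   # stochastic gradient of f at x
    x = x - eta * (grad + rho * (x - z + u))        # gradient-based x-update (inexact argmin)
    z = soft(x + u, lam / rho)                      # exact prox step for the l1 term
    u = u + x - z                                   # scaled dual ascent

print(z)                          # close to soft(a, lam) = [0.9, 0]
```

Replacing the gradient step with an exact or linearized argmin gives the standard and linearized variants; the continuous-time model in the paper treats these in one framework.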

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path-Integrated Differential Estimator

no code implementations NeurIPS 2018 Cong Fang, Chris Junchi Li, Zhouchen Lin, Tong Zhang

Specially, we prove that the SPIDER-SFO algorithm achieves a gradient computation cost of $\mathcal{O}\left( \min( n^{1/2} \epsilon^{-2}, \epsilon^{-3} ) \right)$ to find an $\epsilon$-approximate first-order stationary point.

Stochastic Optimization
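The path-integrated differential estimator can be sketched on a toy finite-sum problem; the epoch length, batch size, and the plain (unnormalized) step below are my simplifications, not the SPIDER-SFO algorithm as analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
a = rng.uniform(0.5, 1.5, size=n)                # f_i(x) = 0.5 * a_i * (x - t_i)^2
t = rng.standard_normal(n)
x_star = np.sum(a * t) / np.sum(a)               # minimizer of the average loss

def grad(x, idx):
    return np.mean(a[idx] * (x - t[idx]))

x = 4.0
x_prev = x
q, batch, eta = 10, 10, 0.3
v = grad(x, np.arange(n))
for step in range(300):
    if step % q == 0:
        v = grad(x, np.arange(n))                # periodic full-gradient refresh
    else:
        idx = rng.integers(0, n, size=batch)
        # path-integrated estimator: same minibatch evaluated at x_t and x_{t-1}
        v = grad(x, idx) - grad(x_prev, idx) + v
    x_prev = x
    x = x - eta * v                              # plain step; SPIDER-SFO uses a normalized step

print(abs(x - x_star))
```

Accumulating minibatch gradient differences along the path keeps the estimator's error proportional to the distance traveled since the last refresh, which is the source of the improved gradient-computation cost.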

A note on concentration inequality for vector-valued martingales with weak exponential-type tails

no code implementations 6 Sep 2018 Chris Junchi Li

We present novel concentration inequalities for martingale differences with finite Orlicz-$\psi_\alpha$ norms.

Dimensionality Reduction, LEMMA

Diffusion Approximations for Online Principal Component Estimation and Global Convergence

no code implementations NeurIPS 2017 Chris Junchi Li, Mengdi Wang, Han Liu, Tong Zhang

In this paper, we adopt diffusion approximation tools to study the dynamics of Oja's iteration, an online stochastic gradient descent method for principal component analysis.
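Oja's iteration itself is short to sketch; the covariance, step size, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
Sigma_half = np.diag([2.0, 1.0, 1.0, 0.5, 0.5])   # covariance Sigma = diag(4, 1, 1, .25, .25)
u = np.zeros(d)
u[0] = 1.0                                        # top principal component of Sigma

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
eta = 0.005
for _ in range(20000):
    z = Sigma_half @ rng.standard_normal(d)       # one streamed sample with covariance Sigma
    w = w + eta * z * (z @ w)                     # Oja update: w += eta * (z z') w
    w /= np.linalg.norm(w)                        # renormalize onto the sphere

print(abs(w @ u))                                 # alignment with the top eigenvector
```

Each step is a single-sample stochastic gradient step on the sphere, which is why diffusion approximations are a natural tool for describing its global dynamics.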

Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes

no code implementations NeurIPS 2016 Chris Junchi Li, Zhaoran Wang, Han Liu

Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory.

Tensor Decomposition

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator

no code implementations NeurIPS 2018 Cong Fang, Chris Junchi Li, Zhouchen Lin, Tong Zhang

For stochastic first-order methods, combining SPIDER with normalized gradient descent, we propose two new algorithms, namely SPIDER-SFO and SPIDER-SFO\textsuperscript{+}, that solve non-convex stochastic optimization problems using stochastic gradients only.

Stochastic Optimization

A convergence analysis of the perturbed compositional gradient flow: averaging principle and normal deviations

no code implementations 2 Sep 2017 Wenqing Hu, Chris Junchi Li

By introducing a separation of fast and slow scales of the two equations, we show that the limit of the slow motion is given by an averaged ordinary differential equation.

On the diffusion approximation of nonconvex stochastic gradient descent

no code implementations 22 May 2017 Wenqing Hu, Chris Junchi Li, Lei Li, Jian-Guo Liu

In addition, we discuss the effects of batch size for deep neural networks, finding that small batch sizes help SGD algorithms escape unstable stationary points and sharp minimizers.

Near-Optimal Stochastic Approximation for Online Principal Component Estimation

no code implementations 16 Mar 2016 Chris Junchi Li, Mengdi Wang, Han Liu, Tong Zhang

We prove for the first time a nearly optimal finite-sample error bound for the online PCA algorithm.
