Search Results for author: Luo Luo

Found 26 papers, 2 papers with code

Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems

no code implementations ICML 2020 Guangzeng Xie, Luo Luo, Yijiang Lian, Zhihua Zhang

This paper studies the lower bound complexity for the minimax optimization problem whose objective function is the average of $n$ individual smooth convex-concave functions.

Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates

no code implementations 4 Feb 2024 Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low

The recently proposed incremental quasi-Newton method is based on the BFGS update and achieves a local superlinear convergence rate that depends on the condition number of the problem.
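
As a point of reference for the update rule the abstract refers to, here is a minimal numpy sketch of the classical (non-incremental) BFGS Hessian-approximation update; the function and variable names are illustrative, not taken from the paper.

import numpy as np

def bfgs_update(B, s, y):
    # One classical BFGS update of the Hessian approximation B, given the
    # iterate difference s = x_new - x_old and the gradient difference
    # y = grad_new - grad_old (requires s @ y > 0 to keep B positive definite).
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Toy usage on a quadratic 0.5 * x^T A x, where the gradient difference is exactly A @ s.
A = np.diag([1.0, 10.0])
s = np.array([1.0, -0.5])
B = bfgs_update(np.eye(2), s, A @ s)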

On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition

no code implementations 4 Feb 2024 Yunyan Bai, Yuxing Liu, Luo Luo

This paper considers the optimization problem of the form $\min_{{\bf x}\in{\mathbb R}^d} f({\bf x})\triangleq \frac{1}{n}\sum_{i=1}^n f_i({\bf x})$, where $f(\cdot)$ satisfies the Polyak--{\L}ojasiewicz (PL) condition with parameter $\mu$ and $\{f_i(\cdot)\}_{i=1}^n$ is $L$-mean-squared smooth.
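
Least squares is a standard example of a finite-sum objective that satisfies the PL condition even without strong convexity; the sketch below (purely illustrative, not taken from the paper) runs plain gradient descent on such an instance and observes the function value converging linearly to the optimum.

import numpy as np

# Illustrative finite-sum objective f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
# Least squares satisfies the PL condition even when A^T A is singular.
rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)          # consistent system, so the optimal value is 0

def f(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

def grad_f(x):
    return A.T @ (A @ x - b) / n

L = np.linalg.norm(A, 2) ** 2 / n       # smoothness constant of f
x = np.zeros(d)
for _ in range(500):
    x -= grad_f(x) / L
print(f(x))                             # close to the optimal value 0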

Faster Stochastic Algorithms for Minimax Optimization under Polyak--Łojasiewicz Conditions

1 code implementation 29 Jul 2023 Lesi Chen, Boyuan Yao, Luo Luo

We prove that SPIDER-GDA can find an $\epsilon$-optimal solution within ${\mathcal O}\left((n + \sqrt{n}\,\kappa_x\kappa_y^2)\log (1/\epsilon)\right)$ stochastic first-order oracle (SFO) complexity, which is better than the state-of-the-art method whose SFO upper bound is ${\mathcal O}\big((n + n^{2/3}\kappa_x\kappa_y^2)\log (1/\epsilon)\big)$, where $\kappa_x\triangleq L/\mu_x$ and $\kappa_y\triangleq L/\mu_y$.
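
A hedged sketch of the core ingredient, a SPIDER/SARAH-style recursive gradient estimator driving a gradient descent ascent loop, on a toy strongly-convex-strongly-concave finite sum; the parameter choices and restart schedule are illustrative and do not reproduce the paper's algorithm.

import numpy as np

rng = np.random.default_rng(1)
n, d = 64, 10
# Components f_i(x, y) = 0.5*||x||^2 - 0.5*||y||^2 + x^T A_i y, a toy
# strongly-convex-strongly-concave finite sum with saddle point at (0, 0).
A = 0.1 * rng.standard_normal((n, d, d))
A_bar = A.mean(axis=0)

def grad_i(i, x, y):
    return x + A[i] @ y, -y + A[i].T @ x

x, y = np.ones(d), np.ones(d)
x_prev, y_prev = x.copy(), y.copy()
eta, q = 0.1, 8                           # step size and full-gradient period
for t in range(400):
    if t % q == 0:                        # anchor: exact full gradient
        vx, vy = x + A_bar @ y, -y + A_bar.T @ x
    else:                                 # recursive SPIDER/SARAH correction
        i = rng.integers(n)
        gx, gy = grad_i(i, x, y)
        gx_old, gy_old = grad_i(i, x_prev, y_prev)
        vx, vy = vx + gx - gx_old, vy + gy - gy_old
    x_prev, y_prev = x.copy(), y.copy()
    x, y = x - eta * vx, y + eta * vy     # descent on x, ascent on y
print(np.linalg.norm(x), np.linalg.norm(y))   # both shrink toward the saddle point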

Accelerating Inexact HyperGradient Descent for Bilevel Optimization

no code implementations 30 Jun 2023 Haikuo Yang, Luo Luo, Chris Junchi Li, Michael I. Jordan

We present a method for solving general nonconvex-strongly-convex bilevel optimization problems.

Bilevel Optimization
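
For context (a standard formulation, not quoted from the paper), the nonconvex-strongly-convex bilevel problem and its hypergradient are usually written as

$\min_{x}\;\Phi(x) \triangleq f\big(x, y^*(x)\big), \qquad y^*(x) \triangleq \arg\min_{y} g(x, y),$

$\nabla\Phi(x) = \nabla_x f\big(x, y^*(x)\big) - \nabla_{xy}^2 g\big(x, y^*(x)\big)\big[\nabla_{yy}^2 g\big(x, y^*(x)\big)\big]^{-1}\nabla_y f\big(x, y^*(x)\big),$

and inexact hypergradient methods replace $y^*(x)$ and the inverse-Hessian-vector product with cheap approximations rather than computing them exactly.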

Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization

no code implementations 16 Jan 2023 Lesi Chen, Jing Xu, Luo Luo

We consider the optimization problem of the form $\min_{x \in \mathbb{R}^d} f(x) \triangleq \mathbb{E}_{\xi} [F(x; \xi)]$, where the component $F(x;\xi)$ is $L$-mean-squared Lipschitz but possibly nonconvex and nonsmooth.

Stochastic Optimization
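
A minimal numpy sketch of a two-point randomized-smoothing gradient estimator, the standard building block for gradient-free methods in this setting; it is illustrative only and does not reproduce the paper's algorithm or its complexity guarantees.

import numpy as np

def two_point_grad_estimate(F, x, delta, rng, num_samples=1):
    # Estimates the gradient of the smoothed function f_delta(x) = E_u[F(x + delta*u)]
    # from function values only, using Gaussian directions u.
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (F(x + delta * u) - F(x)) / delta * u
    return g / num_samples

# Toy usage on a nonsmooth function F(x) = ||x||_1.
rng = np.random.default_rng(0)
F = lambda x: np.abs(x).sum()
x = rng.standard_normal(20)
for _ in range(300):
    x -= 0.01 * two_point_grad_estimate(F, x, delta=1e-3, rng=rng, num_samples=4)
print(F(x))   # much smaller than at the start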

An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization

no code implementations 5 Dec 2022 Lesi Chen, Haishan Ye, Luo Luo

This paper studies the stochastic optimization for decentralized nonconvex-strongly-concave (NC-SC) minimax problems over a multi-agent network.

Stochastic Optimization

An Optimal Stochastic Algorithm for Decentralized Nonconvex Finite-sum Optimization

no code implementations 25 Oct 2022 Luo Luo, Haishan Ye

This paper studies the decentralized nonconvex optimization problem $\min_{x\in{\mathbb R}^d} f(x)\triangleq \frac{1}{m}\sum_{i=1}^m f_i(x)$, where $f_i(x)\triangleq \frac{1}{n}\sum_{j=1}^n f_{i, j}(x)$ is the local function on the $i$-th agent of the network.
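
As background on the decentralized setting (a generic sketch, not the paper's optimal algorithm), each agent can mix its iterate with its neighbors' through a doubly stochastic matrix W and then take a local gradient step:

import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 5                      # m agents, dimension d
# Local quadratics f_i(x) = 0.5 * ||x - c_i||^2; the global minimizer is mean(c_i).
C = rng.standard_normal((m, d))

# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

X = np.zeros((m, d))             # row i is agent i's iterate
eta = 0.1
for _ in range(500):
    grads = X - C                # local gradients at the local iterates
    X = W @ X - eta * grads      # gossip averaging followed by a gradient step

# Every agent ends up near the global minimizer mean(c_i); with a constant step,
# plain decentralized gradient descent only reaches an O(eta) neighborhood.
print(np.abs(X - C.mean(axis=0)).max())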

Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization

1 code implementation 11 Aug 2022 Lesi Chen, Luo Luo

We show that RAIN achieves near-optimal stochastic first-order oracle (SFO) complexity for stochastic minimax optimization in both the convex-concave and strongly-convex-strongly-concave cases.

Stochastic Optimization

Decentralized Stochastic Variance Reduced Extragradient Method

no code implementations 1 Feb 2022 Luo Luo, Haishan Ye

This paper studies decentralized convex-concave minimax optimization problems of the form $\min_x\max_y f(x, y) \triangleq\frac{1}{m}\sum_{i=1}^m f_i(x, y)$, where $m$ is the number of agents and each local function can be written as $f_i(x, y)=\frac{1}{n}\sum_{j=1}^n f_{i, j}(x, y)$.
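
A minimal single-machine sketch of the extragradient step on a toy bilinear saddle point problem (illustrative only; the paper's method is decentralized and variance reduced):

import numpy as np

# Toy bilinear objective f(x, y) = x^T B y, whose unique saddle point is (0, 0).
rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d))
x, y = np.ones(d), np.ones(d)
eta = 0.1

for _ in range(2000):
    # Extrapolation step: gradients evaluated at the current point.
    x_half = x - eta * (B @ y)
    y_half = y + eta * (B.T @ x)
    # Update step: gradients evaluated at the extrapolated point.
    x = x - eta * (B @ y_half)
    y = y + eta * (B.T @ x_half)

# Both norms shrink toward 0; plain gradient descent ascent would slowly diverge here.
print(np.linalg.norm(x), np.linalg.norm(y))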

Quasi-Newton Methods for Saddle Point Problems and Beyond

no code implementations 4 Nov 2021 Chengchang Liu, Luo Luo

This paper studies quasi-Newton methods for solving strongly-convex-strongly-concave saddle point problems (SPP).

Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization

no code implementations 10 Oct 2021 Luo Luo, YuJun Li, Cheng Chen

In this paper, we propose a novel approach for minimax optimization, called Minimax Cubic Newton (MCN), which can find an $\big(\varepsilon,\kappa^{1.5}\sqrt{\rho\varepsilon}\,\big)$-second-order stationary point of $P({\bf x})$ by calling second-order oracles ${\mathcal O}\big(\kappa^{1.5}\sqrt{\rho}\,\varepsilon^{-1.5}\big)$ times and first-order oracles $\tilde{\mathcal O}\big(\kappa^{2}\sqrt{\rho}\,\varepsilon^{-1.5}\big)$ times, where $\kappa$ is the condition number and $\rho$ is the Lipschitz constant of the Hessian of $f({\bf x},{\bf y})$.
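
For reference, writing $P({\bf x}) \triangleq \max_{\bf y} f({\bf x},{\bf y})$, an $(\varepsilon_g, \varepsilon_H)$-second-order stationary point is commonly defined by

$\|\nabla P({\bf x})\| \le \varepsilon_g \qquad\text{and}\qquad \lambda_{\min}\big(\nabla^2 P({\bf x})\big) \ge -\varepsilon_H,$

so the guarantee above corresponds to $\varepsilon_g = \varepsilon$ and $\varepsilon_H = \kappa^{1.5}\sqrt{\rho\varepsilon}$.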

Near Optimal Stochastic Algorithms for Finite-Sum Unbalanced Convex-Concave Minimax Optimization

no code implementations 3 Jun 2021 Luo Luo, Guangzeng Xie, Tong Zhang, Zhihua Zhang

This paper considers stochastic first-order algorithms for convex-concave minimax problems of the form $\min_{\bf x}\max_{\bf y} f({\bf x}, {\bf y})$, where $f$ can be represented as the average of $n$ individual components that are $L$-average smooth.

Decentralized Accelerated Proximal Gradient Descent

no code implementations NeurIPS 2020 Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang

In this paper, we propose a new method that achieves the optimal computational complexity and a near-optimal communication complexity.

BIG-bench Machine Learning

Efficient Projection-Free Algorithms for Saddle Point Problems

no code implementations NeurIPS 2020 Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu

The Frank-Wolfe algorithm is a classic method for constrained optimization problems.
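
For background, a minimal Frank-Wolfe sketch on a toy problem over the probability simplex, where the linear minimization oracle reduces to a single coordinate lookup; this illustrates the classic method only, not the paper's projection-free saddle point algorithms.

import numpy as np

# Minimize f(x) = 0.5 * ||x - c||^2 over the probability simplex.
rng = np.random.default_rng(0)
d = 10
c = rng.random(d)

x = np.ones(d) / d                      # start at the simplex center
for k in range(200):
    grad = x - c
    s = np.zeros(d)
    s[np.argmin(grad)] = 1.0            # linear minimization oracle over the simplex
    gamma = 2.0 / (k + 2)               # classic Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s     # convex combination keeps x feasible
print(x)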

Multi-consensus Decentralized Accelerated Gradient Descent

no code implementations 2 May 2020 Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang

This paper considers the decentralized convex optimization problem, which has a wide range of applications in large-scale machine learning, sensor networks, and control theory.

BIG-bench Machine Learning

Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems

no code implementations NeurIPS 2020 Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang

We consider nonconvex-concave minimax optimization problems of the form $\min_{\bf x}\max_{\bf y\in{\mathcal Y}} f({\bf x},{\bf y})$, where $f$ is strongly-concave in $\bf y$ but possibly nonconvex in $\bf x$ and ${\mathcal Y}$ is a convex and compact set.
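
A plain stochastic gradient descent ascent sketch on a toy objective, with a smaller step size on ${\bf x}$ than on ${\bf y}$ as is typical when $f$ is strongly concave in ${\bf y}$; this is a baseline illustration only, not the stochastic recursive (variance-reduced) estimator studied in the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 5

# Toy objective f(x, y) = x^T y - 0.5*||y||^2 (strongly concave in y),
# with stochastic gradients perturbed by Gaussian noise.
def stoch_grads(x, y):
    noise = 0.1 * rng.standard_normal(2 * d)
    return y + noise[:d], x - y + noise[d:]

x, y = np.ones(d), np.ones(d)
eta_x, eta_y = 0.01, 0.1          # two-timescale steps: slower on x, faster on y
for _ in range(5000):
    gx, gy = stoch_grads(x, y)
    x -= eta_x * gx               # descent on x
    y += eta_y * gy               # ascent on y
print(np.linalg.norm(x), np.linalg.norm(y))   # both settle near 0, up to gradient noise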

A Novel Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization

no code implementations 25 Sep 2019 Guangzeng Xie, Luo Luo, Zhihua Zhang

This paper studies the lower bound complexity for the optimization problem whose objective function is the average of $n$ individual smooth convex functions.

A Stochastic Proximal Point Algorithm for Saddle-Point Problems

no code implementations 13 Sep 2019 Luo Luo, Cheng Chen, Yu-Jun Li, Guangzeng Xie, Zhihua Zhang

We consider saddle point problems whose objective functions are the average of $n$ strongly convex-concave individual components.
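
For reference (a standard formulation, not quoted from the paper), the proximal point step for a saddle point objective $f$ with step size $\eta$ is

$(x_{k+1}, y_{k+1}) = \arg\min_{x}\max_{y}\Big\{ f(x, y) + \frac{1}{2\eta}\|x - x_k\|^2 - \frac{1}{2\eta}\|y - y_k\|^2 \Big\},$

and stochastic variants solve this subproblem only inexactly, using stochastic estimates of $f$.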

A General Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization

no code implementations 22 Aug 2019 Guangzeng Xie, Luo Luo, Zhihua Zhang

This paper studies the lower bound complexity for the optimization problem whose objective function is the average of $n$ individual smooth convex functions.

Approximate Newton Methods and Their Local Convergence

no code implementations ICML 2017 Haishan Ye, Luo Luo, Zhihua Zhang

We propose a unifying framework to analyze local convergence properties of second order methods.

Second-order methods

Robust Frequent Directions with Application in Online Learning

no code implementations 15 May 2017 Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, Tong Zhang

We also apply RFD to online learning and propose an effective hyperparameter-free online Newton algorithm.
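
For context, a minimal numpy sketch of the standard Frequent Directions sketch that RFD builds on; the robust variant additionally maintains a correction term, which is omitted here, and the doubled buffer below is just one common implementation choice.

import numpy as np

def frequent_directions(A, ell):
    # Maintain a small sketch B (2*ell rows) such that B^T B approximates A^T A.
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    next_row = 0
    for i in range(n):
        if next_row == 2 * ell:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell - 1] ** 2
            s_shrunk = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s_shrunk[:, None] * Vt      # shrink: the trailing rows become zero
            next_row = ell
        B[next_row] = A[i]
        next_row += 1
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 30))
B = frequent_directions(A, ell=10)
err = np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A, 'fro') ** 2
print(err)   # relative covariance error, roughly bounded by 1/ell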

Communication Lower Bounds for Distributed Convex Optimization: Partition Data on Features

no code implementations 2 Dec 2016 Zihao Chen, Luo Luo, Zhihua Zhang

Recently, there has been an increasing interest in designing distributed convex optimization algorithms under the setting where the data matrix is partitioned on features.

A Proximal Stochastic Quasi-Newton Algorithm

no code implementations 31 Jan 2016 Luo Luo, Zihao Chen, Zhihua Zhang, Wu-Jun Li

It incorporates the Hessian into the smooth part of the function and exploits a multistage scheme to reduce the variance of the stochastic gradient.
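
A common way to write such a step (a hedged reconstruction from the description above, not necessarily the paper's exact update) is

$x_{k+1} = \arg\min_{x}\Big\{ h(x) + v_k^{\top}(x - x_k) + \tfrac{1}{2\eta}\,(x - x_k)^{\top} B_k (x - x_k) \Big\},$

where $h$ is the nonsmooth part, $B_k$ is a quasi-Newton approximation of the Hessian of the smooth part, and $v_k$ is an SVRG-style variance-reduced estimate such as $v_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x}) + \nabla f(\tilde{x})$, with the reference point $\tilde{x}$ refreshed at each stage.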

SPSD Matrix Approximation via Column Selection: Theories, Algorithms, and Extensions

no code implementations 22 Jun 2014 Shusen Wang, Luo Luo, Zhihua Zhang

In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds.
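
A minimal sketch of the column-selection (Nyström-type) SPSD approximation model that this line of work analyzes; uniform column sampling is used here purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 500
# Build an SPSD matrix K (a Gaussian kernel matrix here).
X = rng.standard_normal((n, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)

# Column-selection model: pick c columns, form C = K[:, idx] and W = K[idx, idx],
# then approximate K by C * pinv(W) * C^T.
c = 80
idx = rng.choice(n, size=c, replace=False)
C = K[:, idx]
W = K[np.ix_(idx, idx)]
K_approx = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(K - K_approx, 'fro') / np.linalg.norm(K, 'fro')
print(err)   # relative error; it shrinks as more columns are sampled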
