Search Results for author: Junchi Yang

Found 9 papers, 1 paper with code

Parameter-Agnostic Optimization under Relaxed Smoothness

no code implementations · 6 Nov 2023 · Florian Hübler, Junchi Yang, Xiang Li, Niao He

Under the standard Lipschitz smoothness assumption, parameter-agnostic methods that converge without stepsize tuning are well understood. However, as the assumption is relaxed to the more realistic $(L_0, L_1)$-smoothness, all existing convergence results still necessitate tuning of the stepsize.
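For reference, the $(L_0, L_1)$-smoothness condition, introduced by Zhang et al. in the context of gradient clipping, lets the curvature grow with the gradient norm. A common statement for twice-differentiable functions is below; the paper may work with a slightly different generalized form.

```latex
% (L_0, L_1)-smoothness: the Hessian norm may scale with the gradient norm.
\[
  \left\lVert \nabla^2 f(x) \right\rVert \le L_0 + L_1 \left\lVert \nabla f(x) \right\rVert
  \quad \text{for all } x .
\]
% Standard L-smoothness is the special case L_0 = L, L_1 = 0.
```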

Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization

no code implementations · NeurIPS 2023 · Liang Zhang, Junchi Yang, Amin Karbasi, Niao He

Particularly, given the inexact initialization oracle, our regularization-based algorithms achieve the best of both worlds - optimal reproducibility and near-optimal gradient complexity - for minimization and minimax optimization.

TiAda: A Time-scale Adaptive Algorithm for Nonconvex Minimax Optimization

no code implementations · 31 Oct 2022 · Xiang Li, Junchi Yang, Niao He

Adaptive gradient methods have shown their ability to adjust the stepsizes on the fly in a parameter-agnostic manner, and empirically achieve faster convergence for solving minimization problems.
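As a minimal illustration of adjusting stepsizes on the fly, here is an AdaGrad-norm-style update in NumPy. This is a generic sketch of the adaptive mechanism only, not TiAda itself (TiAda's contribution is additionally balancing the time scales of the min and max variables); the function name and default parameters are illustrative.

```python
import numpy as np

def adagrad_norm_step(x, grad, accum, eta=1.0, eps=1e-8):
    """One AdaGrad-norm update: the effective stepsize eta / sqrt(accum)
    shrinks automatically as squared gradient norms accumulate, so no
    problem-dependent stepsize has to be tuned in advance."""
    accum += float(np.dot(grad, grad))   # accumulate squared gradient norm
    x = x - eta / (np.sqrt(accum) + eps) * grad
    return x, accum
```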

Nest Your Adaptive Algorithm for Parameter-Agnostic Nonconvex Minimax Optimization

no code implementations · 1 Jun 2022 · Junchi Yang, Xiang Li, Niao He

Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization owing to their parameter-agnostic ability -- requiring no a priori knowledge about problem-specific parameters nor tuning of learning rates.
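The title suggests a nested two-loop structure: an inner adaptive loop that approximately solves the maximization in y, wrapped in an outer adaptive loop on x. The sketch below is hypothetical; the inner stopping rule, the choice of AdaGrad-norm updates, and the parameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def nested_adaptive_minimax(grad_x, grad_y, x, y, outer_iters=100,
                            inner_tol=1e-3, eta=1.0, eps=1e-8):
    """Hypothetical nesting: adaptively ascend on y until the y-gradient
    is small, then take one adaptive descent step on x."""
    gx_accum = gy_accum = eps
    for _ in range(outer_iters):
        gy = grad_y(x, y)
        while np.linalg.norm(gy) > inner_tol:   # inner adaptive loop on y
            gy_accum += float(np.dot(gy, gy))
            y = y + eta / np.sqrt(gy_accum) * gy
            gy = grad_y(x, y)
        gx = grad_x(x, y)                        # outer adaptive step on x
        gx_accum += float(np.dot(gx, gx))
        x = x - eta / np.sqrt(gx_accum) * gx
    return x, y
```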

Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity

1 code implementation · 10 Dec 2021 · Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He

Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax optimization, is widely used in practical applications such as generative adversarial networks (GANs) and adversarial training.
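For concreteness, plain GDA takes simultaneous gradient steps, descending on x and ascending on y. A minimal NumPy sketch follows; the stepsizes and the quadratic toy problem are illustrative, and the paper analyzes variants of this baseline rather than this exact loop.

```python
import numpy as np

def gda(grad_x, grad_y, x, y, eta_x=0.01, eta_y=0.1, iters=2000):
    """Plain gradient descent ascent on f(x, y): simultaneously
    descend on x and ascend on y each iteration."""
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta_x * gx   # minimize over x
        y = y + eta_y * gy   # maximize over y
    return x, y

# Toy saddle problem f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle point at (0, 0).
x_star, y_star = gda(lambda x, y: x + y, lambda x, y: x - y, 1.0, 1.0)
```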

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization

no code implementations · 29 Mar 2021 · Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He

In the averaged smooth finite-sum setting, our proposed algorithm improves over previous algorithms by providing a nearly-tight dependence on the condition number.

A Catalyst Framework for Minimax Optimization

no code implementations · NeurIPS 2020 · Junchi Yang, Siqi Zhang, Negar Kiyavash, Niao He

We introduce a generic \emph{two-loop} scheme for smooth minimax optimization with strongly-convex-concave objectives.
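The Catalyst-style idea behind such two-loop schemes: the outer loop augments the objective with a quadratic proximal term in x, making each subproblem better conditioned, and the inner loop solves that subproblem inexactly with an off-the-shelf solver. A minimal sketch, assuming gradient oracles and using plain GDA as the interchangeable inner solver; the regularization weight and iteration counts are illustrative assumptions, not the paper's schedule.

```python
import numpy as np

def catalyst_minimax(grad_x, grad_y, x, y, tau=1.0,
                     outer_iters=50, inner_iters=200, eta=0.05):
    """Two-loop scheme: each outer iteration inexactly solves the
    proximal subproblem min_x max_y f(x, y) + (tau/2)*||x - x_c||^2,
    then recenters the proximal term at the new x."""
    for _ in range(outer_iters):
        x_c = x                                  # current prox center
        for _ in range(inner_iters):             # inner solver (here: GDA)
            gx = grad_x(x, y) + tau * (x - x_c)  # gradient of regularized f
            gy = grad_y(x, y)
            x = x - eta * gx
            y = y + eta * gy
    return x, y
```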

Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems

no code implementations · NeurIPS 2020 · Junchi Yang, Negar Kiyavash, Niao He

Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning.

Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems

no code implementations · 22 Feb 2020 · Junchi Yang, Negar Kiyavash, Niao He

Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning.
