Search Results for author: Yu-Hong Dai

Found 6 papers, 0 papers with code

Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems

no code implementations · 24 Nov 2022 · Zi Xu, Zi-Qi Wang, Jun-Lin Wang, Yu-Hong Dai

In this paper, we consider a class of nonconvex-nonconcave minimax problems, i.e., NC-PL minimax problems, whose objective functions satisfy the Polyak-Łojasiewicz (PL) condition with respect to the inner variable.
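For intuition, the alternating zeroth-order pattern can be sketched as follows. This is a minimal illustration under standard assumptions: the two-point Gaussian-smoothing estimator, the step sizes, and the names zo_grad/zo_agda are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, z, mu=1e-4, n_samples=10):
    # Two-point Gaussian-smoothing gradient estimator: uses only
    # function values f(z + mu*u) and f(z - mu*u), no derivatives.
    g = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.standard_normal(z.shape)
        g += (f(z + mu * u) - f(z - mu * u)) / (2.0 * mu) * u
    return g / n_samples

def zo_agda(f, x0, y0, eta_x=0.01, eta_y=0.01, iters=500):
    # Alternating scheme: a zeroth-order descent step on x, then a
    # zeroth-order ascent step on y that already sees the updated x.
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        x = x - eta_x * zo_grad(lambda u: f(u, y), x)
        y = y + eta_y * zo_grad(lambda v: f(x, v), y)
    return x, y
```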

An efficient linear programming rounding-and-refinement algorithm for large-scale network slicing problem

no code implementations · 4 Feb 2021 · Wei-Kun Chen, Ya-Feng Liu, Yu-Hong Dai, Zhi-Quan Luo

In this paper, we consider the network slicing problem, which seeks to map multiple customized virtual network requests (also called services) onto a common shared network infrastructure and to allocate network resources to meet diverse service requirements. We propose an efficient two-stage algorithm for solving this NP-hard problem.

Networking and Internet Architecture · Information Theory · Signal Processing · Optimization and Control
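The relax-round-refine pattern behind such two-stage LP algorithms can be illustrated generically. The fixing rule, the round_and_refine name, and the use of scipy's linprog below are assumptions for the sketch, not the paper's actual procedure (a real implementation would also backtrack on infeasibility):

```python
import numpy as np
from scipy.optimize import linprog

def round_and_refine(c, A_ub, b_ub, n_bin, tol=1e-6, max_rounds=50):
    # Stage 1: solve the LP relaxation of min c'x s.t. A_ub x <= b_ub,
    # with the first n_bin variables relaxed from {0,1} to [0,1].
    # Stage 2: repeatedly fix the fractional binary closest to integral
    # and re-solve the LP (the refinement loop).
    bounds = [(0, 1)] * n_bin + [(0, None)] * (len(c) - n_bin)
    fixed = {}
    for _ in range(max_rounds):
        bnds = [(fixed[i], fixed[i]) if i in fixed else bounds[i]
                for i in range(len(c))]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
        if not res.success:
            return None  # infeasible after fixing; would backtrack in practice
        x = res.x
        frac = [i for i in range(n_bin)
                if i not in fixed and min(x[i], 1.0 - x[i]) > tol]
        if not frac:
            return x  # all binary variables are (near-)integral
        i = min(frac, key=lambda j: min(x[j], 1.0 - x[j]))
        fixed[i] = int(round(x[i]))
    return None
```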

Majorized Semi-proximal Alternating Coordinate Method for Nonsmooth Convex-Concave Minimax Optimization

no code implementations · 21 Dec 2020 · Yu-Hong Dai, Jiani Wang, Liwei Zhang

While many numerical algorithms exist for solving smooth convex-concave minimax problems, numerical algorithms for nonsmooth convex-concave minimax problems remain very rare.

Optimization and Control (MSC 90C30)
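To convey the flavor of alternating proximal schemes for nonsmooth saddle problems, here is a generic sketch on min_x max_y K(x, y) + ||x||_1 − ||y||_1. The L1 terms, step sizes, and names are placeholder assumptions; this is not the paper's majorized semi-proximal method:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding), standing in
    # for any "easy" nonsmooth term with a cheap prox.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alt_prox_minimax(grad_x, grad_y, x0, y0, tau=0.05, sigma=0.05, iters=1000):
    # Alternate a proximal-gradient descent step in x with a
    # proximal-gradient ascent step in y at the updated x.
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = prox_l1(x - tau * grad_x(x, y), tau)
        y = prox_l1(y + sigma * grad_y(x, y), sigma)
    return x, y
```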

A variable metric mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize

no code implementations · 2 Oct 2020 · Tengteng Yu, Xin-Wei Liu, Yu-Hong Dai, Jie Sun

In this paper, we propose a variable metric mini-batch proximal stochastic recursive gradient algorithm, VM-mSRGBB, which updates the metric using a new diagonal Barzilai-Borwein (BB) stepsize.
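The diagonal BB idea fits a diagonal metric D = diag(d) to the secant condition D s ≈ y componentwise and safeguards the result. The clipping bounds and helper name below are assumptions; the paper's exact update may differ:

```python
import numpy as np

def diag_bb_metric(x, x_prev, g, g_prev, d_min=1e-4, d_max=1e4):
    # Componentwise Barzilai-Borwein fit: d_i ≈ y_i / s_i, where
    # s = x - x_prev and y = g - g_prev, clipped into [d_min, d_max].
    s = x - x_prev
    y = g - g_prev
    s_safe = np.where(np.abs(s) > 1e-12, s, 1.0)
    d = np.where(np.abs(s) > 1e-12, y / s_safe, 1.0)
    return np.clip(d, d_min, d_max)

# The metric then scales a proximal (stochastic) gradient step,
# e.g. x_new = prox of h with weights 1/d at the point x - g / d,
# for a composite objective f + h.
```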

Training GANs with Centripetal Acceleration

no code implementations · 24 Feb 2019 · Wei Peng, Yu-Hong Dai, Hui Zhang, Li-Zhi Cheng

Training generative adversarial networks (GANs) often suffers from cyclic behaviors of iterates.
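Centripetal acceleration counters this cycling with a correction term proportional to the change in consecutive gradients. A sketch of that idea on a generic two-player game follows; the sign convention, beta, and function names are illustrative, not the paper's exact update:

```python
import numpy as np

def simgd_centripetal(grad_theta, grad_phi, theta0, phi0,
                      eta=0.05, beta=0.5, iters=2000):
    # Simultaneous gradient steps plus a "centripetal" correction
    # beta * (g_t - g_prev) that damps rotation around the equilibrium.
    theta, phi = theta0.copy(), phi0.copy()
    g_th_prev = grad_theta(theta, phi)
    g_ph_prev = grad_phi(theta, phi)
    for _ in range(iters):
        g_th = grad_theta(theta, phi)
        g_ph = grad_phi(theta, phi)
        theta = theta - eta * (g_th + beta * (g_th - g_th_prev))  # minimizer
        phi   = phi   + eta * (g_ph + beta * (g_ph - g_ph_prev))  # maximizer
        g_th_prev, g_ph_prev = g_th, g_ph
    return theta, phi
```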

Barzilai-Borwein Step Size for Stochastic Gradient Descent

no code implementations · NeurIPS 2016 · Conghui Tan, Shiqian Ma, Yu-Hong Dai, Yuqiu Qian

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm.

Stochastic Optimization
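The classical BB1 formula picks the step from two consecutive iterates and gradients, eta = (sᵀs) / (sᵀy) with s = x_k − x_{k−1} and y = g_k − g_{k−1}; the paper's SGD/SVRG variants apply such a step once per epoch from aggregated quantities. A minimal sketch of the deterministic formula (the fallback value and safeguard threshold are assumptions):

```python
import numpy as np

def bb_step(x, x_prev, g, g_prev, eta_fallback=0.1):
    # BB1 step size: eta = (s's) / (s'y), with a fallback when the
    # curvature estimate s'y is too small or negative.
    s = x - x_prev
    y = g - g_prev
    sty = s @ y
    return (s @ s) / sty if sty > 1e-12 else eta_fallback
```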
