Weakly-Convex Concave Min-Max Optimization: Provable Algorithms and Applications in Machine Learning

Min-max problems have broad applications in machine learning, including learning with a non-decomposable loss and learning with robustness to the data distribution. The convex-concave min-max problem is an active topic of research, for which efficient algorithms and sound theoretical foundations have been developed. However, it remains a challenge to design provably efficient algorithms for non-convex min-max problems with or without smoothness. In this paper, we study a family of non-convex min-max problems whose objective function is weakly convex in the minimization variables and concave in the maximization variables. We propose a proximally guided stochastic subgradient method for the non-smooth instances and a proximally guided stochastic variance-reduced method for the smooth instances in this family of problems. We analyze the time complexity of the proposed methods for finding a nearly stationary point of the outer minimization problem corresponding to the min-max problem.
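To illustrate the proximal-guidance idea at a high level, here is a minimal sketch (not the authors' exact algorithm) of a proximally guided stochastic subgradient loop for min_x max_y f(x, y), where f is weakly convex in x and concave in y. The oracle names `grad_x`, `grad_y` and the parameters `gamma`, `eta`, and the iteration counts are hypothetical placeholders; the key point is that each outer step adds a quadratic proximal term centered at the current iterate, making the inner subproblem convex-concave when the proximal parameter exceeds the weak-convexity modulus.

```python
import numpy as np

def prox_guided_sgd(grad_x, grad_y, x0, y0, gamma=1.0,
                    outer_iters=50, inner_iters=200, eta=0.01):
    """Sketch of a proximally guided stochastic subgradient method for
    min_x max_y f(x, y), with f weakly convex in x and concave in y.

    grad_x(x, y), grad_y(x, y): stochastic (sub)gradient oracles (assumed).
    gamma: proximal parameter, assumed larger than the weak-convexity
           modulus so each regularized subproblem is convex-concave.
    """
    x_center = x0.copy()
    y = y0.copy()
    for _ in range(outer_iters):
        # Inner loop: stochastic primal-dual subgradient steps on the
        # proximally regularized subproblem
        #   min_x max_y f(x, y) + (gamma / 2) * ||x - x_center||^2
        x = x_center.copy()
        x_avg = np.zeros_like(x)
        y_avg = np.zeros_like(y)
        for _ in range(inner_iters):
            gx = grad_x(x, y) + gamma * (x - x_center)  # descent direction in x
            gy = grad_y(x, y)                           # ascent direction in y
            x = x - eta * gx
            y = y + eta * gy
            x_avg += x / inner_iters
            y_avg += y / inner_iters
        # Re-center the proximal term at the averaged inner solution.
        x_center, y = x_avg, y_avg
    return x_center, y
```

The approximate stationarity of the returned point is measured with respect to the outer minimization problem phi(x) = max_y f(x, y), in line with the guarantee stated in the abstract.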
