Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints

ICML 2020 · Runchao Ma, Qihang Lin, Tianbao Yang

Optimization models with non-convex constraints arise in many machine learning tasks, e.g., learning with fairness constraints or Neyman-Pearson classification with a non-convex loss. Although many efficient methods with theoretical convergence guarantees have been developed for unconstrained non-convex problems, designing provably efficient algorithms for problems with non-convex functional constraints remains a challenge. This paper proposes a class of subgradient methods for constrained optimization in which the objective function and the constraint functions are weakly convex and nonsmooth. Our methods solve a sequence of strongly convex subproblems, where a quadratic regularization term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater's condition, we establish the computational complexity of our methods for finding a nearly stationary point.
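To make the scheme concrete: one natural instantiation consistent with the abstract forms, at the current iterate x_k, the subproblem min_x f(x) + rho*||x - x_k||^2 subject to g(x) + rho*||x - x_k||^2 <= 0, which is strongly convex once rho exceeds the weak-convexity modulus. Below is a minimal Python sketch of this outer loop, paired with a switching subgradient method as the inner solver for the strongly convex subproblem; the function names, step sizes, regularization constants, and the toy instance are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the scheme described in the abstract, not the authors'
# code. Outer loop: around the current iterate x_k, add rho*||x - x_k||^2 to
# the objective f and the constraint g, making the subproblem strongly convex
# when rho exceeds the weak-convexity modulus; solve each subproblem
# approximately with an (assumed) switching subgradient method.
import numpy as np

def switching_subgradient(F, G, F_sub, G_sub, x0, lr, tol, iters):
    """Approximately solve min F(x) s.t. G(x) <= 0 for strongly convex F, G.

    Polyak-style switching rule: step on the objective when the constraint
    is nearly satisfied, otherwise step on the constraint.
    """
    x = x0.copy()
    best, best_val = x.copy(), np.inf
    for t in range(iters):
        if G(x) <= tol:
            if F(x) < best_val:            # track best nearly feasible point
                best, best_val = x.copy(), F(x)
            d = F_sub(x)
        else:
            d = G_sub(x)
        x = x - (lr / (t + 1)) * d         # diminishing steps (strong convexity)
    return best

def quad_reg_subgradient(f, g, f_sub, g_sub, x0, rho, outer=30,
                         inner=400, lr=0.5, tol=1e-3):
    """Solve a sequence of quadratically regularized strongly convex subproblems."""
    x = x0.copy()
    for _ in range(outer):
        xk = x.copy()
        # Regularized objective/constraint and their subgradients.
        F = lambda z, xk=xk: f(z) + rho * np.sum((z - xk) ** 2)
        G = lambda z, xk=xk: g(z) + rho * np.sum((z - xk) ** 2)
        F_sub = lambda z, xk=xk: f_sub(z) + 2.0 * rho * (z - xk)
        G_sub = lambda z, xk=xk: g_sub(z) + 2.0 * rho * (z - xk)
        x = switching_subgradient(F, G, F_sub, G_sub, xk, lr, tol, inner)
    return x

if __name__ == "__main__":
    # Toy instance (an assumption for illustration): f(x) = |x^2 - 1| is
    # nonsmooth, non-convex, and 2-weakly convex; constraint x - 0.5 <= 0.
    # Starting from a feasible point, x = 0.5 is a stationary (KKT) point.
    f = lambda z: abs(z[0] ** 2 - 1.0)
    f_sub = lambda z: np.array([np.sign(z[0] ** 2 - 1.0) * 2.0 * z[0]])
    g = lambda z: z[0] - 0.5
    g_sub = lambda z: np.array([1.0])
    x = quad_reg_subgradient(f, g, f_sub, g_sub, np.array([0.3]), rho=2.0)
    print("nearly stationary point:", x)   # expected near 0.5
```

In this sketch the inner solver can be swapped for any method for strongly convex constrained problems, matching the abstract's remark that each subproblem can be solved by various algorithms; the feasible starting point reflects the kind of regularity (uniform Slater's condition) the analysis assumes.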

Categories

Optimization and Control