Stochastic quasi-Newton methods for non-strongly convex problems: convergence and rate analysis

15 Mar 2016  ·  Farzad Yousefian, Angelia Nedić, Uday V. Shanbhag ·

Motivated by applications in optimization and machine learning, we consider stochastic quasi-Newton (SQN) methods for solving stochastic optimization problems. In the literature, the convergence analysis of these algorithms relies on strong convexity of the objective function, and to our knowledge no rate analysis exists in the absence of this assumption. Motivated by this gap, we allow the objective function to be merely convex and develop a cyclic regularized SQN method in which the gradient mapping and the Hessian approximation matrix are both regularized at each iteration and updated in a cyclic manner. We show that, under suitable assumptions on the stepsize and regularization parameters, the objective function value converges to the optimal objective value of the original problem, both almost surely and in expectation. For each case, we provide a class of parameter sequences that guarantees convergence. Moreover, we derive the rate of convergence in terms of the objective function value. Our empirical analysis on a binary classification problem shows that the proposed scheme performs well compared to both classic regularized SQN schemes and the stochastic approximation method.
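The abstract describes the method only at a high level. The NumPy sketch below illustrates one plausible instantiation of a regularized SQN iteration on a logistic-regression binary classification problem: a Tikhonov-regularized gradient mapping, a cyclically refreshed regularized inverse-Hessian approximation, and decaying stepsize/regularization schedules. The specific update rule, parameter schedules, cycle length, and synthetic data are illustrative assumptions, not the authors' exact scheme.

```python
# Minimal illustrative sketch of a cyclic regularized SQN iteration (assumed form,
# not the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data; the logistic loss is convex but not strongly convex.
n_samples, dim = 1000, 10
A = rng.standard_normal((n_samples, dim))
w_true = rng.standard_normal(dim)
y = np.where(A @ w_true + 0.1 * rng.standard_normal(n_samples) > 0, 1.0, -1.0)

def stochastic_grad(x, batch):
    """Stochastic gradient of the average logistic loss on a sampled minibatch."""
    Ab, yb = A[batch], y[batch]
    margins = yb * (Ab @ x)
    coeff = -yb / (1.0 + np.exp(margins))      # derivative of log(1 + e^{-m}) w.r.t. m
    return Ab.T @ coeff / len(batch)

x = np.zeros(dim)
H = np.eye(dim)           # approximation of the (regularized) inverse Hessian
cycle_len = 20            # Hessian approximation refreshed once per cycle ("cyclic" update)
batch_size = 32

for k in range(2000):
    gamma_k = 1.0 / (k + 10) ** 0.6    # assumed stepsize schedule
    mu_k = 1.0 / (k + 10) ** 0.3       # assumed regularization schedule, mu_k -> 0

    batch = rng.integers(n_samples, size=batch_size)
    # Regularized gradient mapping: stochastic gradient plus Tikhonov term mu_k * x.
    g = stochastic_grad(x, batch) + mu_k * x

    if k > 0 and k % cycle_len == 0:
        # Cyclic, regularized BFGS-style update of the inverse-Hessian approximation.
        s = x - x_prev
        y_diff = g - g_prev + mu_k * s
        sy = s @ y_diff
        if sy > 1e-10:
            rho = 1.0 / sy
            V = np.eye(dim) - rho * np.outer(s, y_diff)
            H = V @ H @ V.T + rho * np.outer(s, s)

    x_prev, g_prev = x.copy(), g.copy()
    x -= gamma_k * (H @ g)             # SQN step: x_{k+1} = x_k - gamma_k * H_k * g_k

loss = np.mean(np.logaddexp(0.0, -y * (A @ x)))
print(f"final average logistic loss: {loss:.4f}")
```

The cyclic refresh of `H` and the vanishing regularization `mu_k` are the two ingredients the abstract emphasizes; the particular decay exponents above are placeholders chosen only so the sketch runs.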


Categories

Optimization and Control