Search Results for author: Rachael Tappenden

Found 7 papers, 0 papers with code

Gradient Descent and the Power Method: Exploiting their connection to find the leftmost eigen-pair and escape saddle points

no code implementations • 2 Nov 2022 • Rachael Tappenden, Martin Takáč

This work shows that applying Gradient Descent (GD) with a fixed step size to minimize a (possibly nonconvex) quadratic function is equivalent to running the Power Method (PM) on the gradients.
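The stated equivalence follows from a one-line identity: for f(x) = ½xᵀAx − bᵀx, the GD iterates x₊ = x − η∇f(x) make the gradients satisfy g₊ = (I − ηA)g, i.e. a Power Method on M = I − ηA. Below is a minimal numerical sketch of that identity on synthetic data; the matrix, spectrum, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric test matrix with a known spectrum; the leftmost
# eigenvalue is -2 (hypothetical test data, not from the paper).
eigs = np.array([-2.0, -0.5, 0.3, 1.0, 1.5])
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(5)

eta = 0.9 / np.abs(eigs).max()     # fixed GD step size
M = np.eye(5) - eta * A            # Power Method iteration matrix

x = rng.standard_normal(5)
g = A @ x - b                      # gradient of f(x) = 0.5 x'Ax - b'x
for _ in range(200):
    x = x - eta * g                # one GD step
    g_new = A @ x - b
    # The key identity: gradients follow g_{k+1} = (I - eta*A) g_k.
    assert np.allclose(g_new, M @ g)
    g = g_new

# The normalized gradient aligns with the eigenvector of A whose
# eigenvalue maximizes |1 - eta*lambda| -- here the leftmost one.
v = g / np.linalg.norm(g)
lam = v @ A @ v                    # Rayleigh quotient estimate
print(lam)
```

With 0 < η ≤ 0.9/ρ(A) every factor 1 − ηλ is positive and largest for the smallest λ, which is why the iteration extracts the leftmost eigen-pair.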

Stochastic Gradient Methods with Preconditioned Updates

no code implementations • 1 Jun 2022 • Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč

There are several algorithms for such problems, but existing methods often work poorly when the problem is badly scaled and/or ill-conditioned, and a primary goal of this work is to introduce methods that alleviate this issue.
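The paper's preconditioners are built from curvature information and are not reproduced here; the sketch below only illustrates the general idea of a preconditioned update x ← x − η P⁻¹g on a badly scaled least-squares problem, using a simple diagonal-of-the-Hessian scaling. All data, seeds, batch sizes, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Badly scaled interpolation problem: f(x) = 0.5 * mean_i (a_i'x - b_i)^2,
# with column scales spanning four orders of magnitude.
m, n = 200, 10
A = rng.standard_normal((m, n)) * np.logspace(0, 4, n)
x_star = rng.standard_normal(n)
b = A @ x_star                         # exact interpolation, noise-free

diag_H = (A * A).mean(axis=0)          # diagonal of the Hessian A'A/m

def sgd(P_inv, lr, steps=3000, batch=20, seed=7):
    """Minibatch SGD with update x <- x - lr * P_inv * g (P_inv=1: plain SGD)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(steps):
        idx = rng.integers(m, size=batch)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient
        x = x - lr * P_inv * g
    return np.linalg.norm(x - x_star)

# Plain SGD needs a tiny step to stay stable on this scaling.
err_plain = sgd(P_inv=1.0, lr=0.5 / (A * A).sum(axis=1).max())
# Diagonal preconditioning rescales every coordinate to comparable curvature.
err_prec = sgd(P_inv=1.0 / diag_H, lr=0.05)
print(err_plain, err_prec)
```

The plain run stalls in the weakly scaled coordinates, while the preconditioned run drives the error to near zero with an O(1) step size, which is the behavior the paper targets.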

Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences

no code implementations • 10 Oct 2017 • Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden, Martin Takáč

In this work we introduce the concept of an Underestimate Sequence (UES), which is motivated by Nesterov's estimate sequence.
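The point of an underestimate is that it yields a computable optimality certificate: any lower bound φ ≤ f* turns f(xₖ) − φ into a verified bound on the gap f(xₖ) − f*. The sketch below is not the UES construction from the paper, only the simplest strong-convexity underestimate it generalizes, f* ≥ f(x) − ‖∇f(x)‖²/(2μ), demonstrated on a toy quadratic with assumed constants.

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 x'Hx, so f* = 0 at x* = 0.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
H = Q @ np.diag(np.linspace(1.0, 10.0, 6)) @ Q.T   # mu = 1, L = 10
mu, L = 1.0, 10.0

x = rng.standard_normal(6)
for _ in range(50):
    g = H @ x
    f_x = 0.5 * x @ H @ x
    # Strong convexity gives the computable lower bound
    #   f* >= f(x) - ||g||^2 / (2*mu),
    # so this quantity upper-bounds the true gap f(x) - f*.
    certified_gap = (g @ g) / (2 * mu)
    assert f_x <= certified_gap + 1e-12   # certificate valid (here f* = 0)
    x = x - g / L                         # plain gradient step

print(certified_gap)
```

Each iterate thus comes with a "safe" certificate that shrinks alongside the true gap, which is the practical payoff of an underestimate sequence.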

Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption

no code implementations • 8 Jun 2015 • Chenxin Ma, Rachael Tappenden, Martin Takáč

We show that the famous SDCA algorithm for optimizing the SVM dual problem, or the stochastic coordinate descent method for the LASSO problem, fits into the framework of RC-FDM.
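As one concrete member of that framework, here is a minimal randomized coordinate descent for the LASSO, min ½‖Ax − b‖² + λ‖x‖₁, using the standard soft-thresholding coordinate update; the data, λ, and iteration budget are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_rcd(A, b, lam, iters=5000, seed=0):
    """Randomized coordinate descent for 0.5*||Ax-b||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                      # maintained residual b - Ax
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvatures ||a_j||^2
    for _ in range(iters):
        j = rng.integers(n)            # sample a coordinate uniformly
        rho = A[:, j] @ r + col_sq[j] * x[j]
        x_new = soft_threshold(rho, lam) / col_sq[j]   # exact coordinate min
        r += A[:, j] * (x[j] - x_new)  # update the residual incrementally
        x[j] = x_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 5.0
x = lasso_rcd(A, b, lam)
print(int((x != 0).sum()), "nonzero coefficients")
```

Each step is an exact one-dimensional minimization, so the objective is monotonically non-increasing, and at convergence the gradient satisfies the LASSO optimality condition |aⱼᵀ(Ax − b)| ≤ λ for every coordinate.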

Separable Approximations and Decomposition Methods for the Augmented Lagrangian

no code implementations • 30 Aug 2013 • Rachael Tappenden, Peter Richtárik, Burak Buke

In this paper we study decomposition methods based on separable approximations for minimizing the augmented Lagrangian.
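For orientation, the augmented Lagrangian of min f(x) s.t. Ax = b is L_β(x, y) = f(x) + yᵀ(Ax − b) + (β/2)‖Ax − b‖², minimized in x and followed by a multiplier update. The sketch below runs the plain method of multipliers on a toy quadratic where the x-step has a closed form; the paper's contribution is to replace that exact x-minimization with separable approximations, which is not shown here. Data and β are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
beta = 5.0                                  # penalty parameter (assumed)

x = np.zeros(n)
y = np.zeros(m)                             # multiplier estimate
for _ in range(100):
    # Exact x-minimization of the augmented Lagrangian for f(x) = 0.5*||x||^2:
    #   min_x 0.5*||x||^2 + y'(Ax - b) + 0.5*beta*||Ax - b||^2
    # whose optimality condition is (I + beta*A'A) x = A'(beta*b - y).
    x = np.linalg.solve(np.eye(n) + beta * A.T @ A, A.T @ (beta * b - y))
    y = y + beta * (A @ x - b)              # multiplier (dual ascent) update

print(np.linalg.norm(A @ x - b))            # constraint violation -> 0
```

The iterates converge to the minimum-norm solution of Ax = b, i.e. the true constrained minimizer of ½‖x‖².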

Inexact Coordinate Descent: Complexity and Preconditioning

no code implementations • 19 Apr 2013 • Rachael Tappenden, Peter Richtárik, Jacek Gondzio

In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method.
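A minimal sketch of the exact (non-inexact) version of such a method: pick a block uniformly at random and take a step of length 1/Lᵢ against the partial gradient, where Lᵢ is the blockwise Lipschitz constant. The quadratic, block structure, and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 12, 3                                # 12 variables in 4 blocks of 3
Q = rng.standard_normal((n, n))
H = Q.T @ Q + np.eye(n)                     # convex quadratic f(x) = 0.5 x'Hx
blocks = [np.arange(i, i + k) for i in range(0, n, k)]
# Block Lipschitz constants: largest eigenvalue of each diagonal block of H.
L = [np.linalg.eigvalsh(H[np.ix_(B, B)]).max() for B in blocks]

x = rng.standard_normal(n)
for _ in range(2000):
    i = rng.integers(len(blocks))           # sample a block uniformly
    B = blocks[i]
    g_B = H[B] @ x                          # partial gradient w.r.t. block B
    x[B] -= g_B / L[i]                      # blockwise 1/L_i step

print(0.5 * x @ H @ x)                      # objective -> f* = 0
```

Each step decreases f by at least ‖g_B‖²/(2Lᵢ), which is the descent estimate underlying the linear convergence analysis of randomized block coordinate methods.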
