Search Results for author: Atara Kaplan

Found 6 papers, 0 papers with code

Efficiency of First-Order Methods for Low-Rank Tensor Recovery with the Tensor Nuclear Norm Under Strict Complementarity

no code implementations · 3 Aug 2023 · Dan Garber, Atara Kaplan

For a smooth objective function, when initialized within a certain proximity of an optimal solution that satisfies strict complementarity (SC), standard projected gradient methods require only SVD computations (for projecting onto the tensor nuclear norm ball) of rank matching the tubal rank of the optimal solution.
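To make the projection step concrete, below is a minimal numpy sketch, not the paper's code, of projecting onto a tensor nuclear norm ball under the common t-SVD definition, in which the norm is (1/n3) times the sum of the singular values of the frontal slices of the FFT of the tensor along its third mode. The function names and the full-SVD loop are illustrative assumptions; the point is that the projection acts only on singular values, so rank-r truncated SVDs suffice once the iterates are near a solution of tubal rank r.

```python
# A hedged sketch of projection onto a tensor-nuclear-norm ball (t-SVD definition).
# Assumptions: TNN(A) = (1/n3) * sum of singular values of the frontal slices of
# fft(A, axis=2). Full SVDs are used below for simplicity; near a low-tubal-rank
# optimum, rank-r truncated SVDs of each slice suffice.
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of a nonnegative vector v onto {u >= 0 : sum(u) <= tau}."""
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_tnn_ball(A, tau):
    """Project a real n1 x n2 x n3 tensor A onto {X : TNN(X) <= tau}."""
    n1, n2, n3 = A.shape
    Ahat = np.fft.fft(A, axis=2)
    Us, ss, Vs = [], [], []
    for k in range(n3):
        U, s, Vt = np.linalg.svd(Ahat[:, :, k], full_matrices=False)
        Us.append(U); ss.append(s); Vs.append(Vt)
    # The projection only touches singular values: project them jointly onto
    # the l1 ball implied by the TNN constraint (scaled by 1/n3).
    s_all = np.concatenate(ss)
    s_proj = project_l1_ball(s_all / n3, tau) * n3
    out = np.empty_like(Ahat)
    ofs = 0
    for k in range(n3):
        m = len(ss[k])
        out[:, :, k] = (Us[k] * s_proj[ofs:ofs + m]) @ Vs[k]
        ofs += m
    # Conjugate symmetry of the FFT slices is preserved, so the result is real.
    return np.real(np.fft.ifft(out, axis=2))
```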

Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization Problems

no code implementations · 23 Jun 2022 · Dan Garber, Atara Kaplan

Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.

Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems

no code implementations · NeurIPS 2021 · Dan Garber, Atara Kaplan

Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.

On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization

no code implementations · 18 Dec 2020 · Dan Garber, Atara Kaplan

In this work we propose efficient implementations of MEG, with both deterministic and stochastic gradients, which are tailored for optimization with low-rank matrices and use only a single low-rank SVD computation on each iteration.
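For context, the sketch below shows the standard Matrix Exponentiated Gradient update over the spectrahedron {X ⪰ 0, Tr(X) = 1}, the usual MEG setting; the function name and setup are illustrative assumptions, not the paper's implementation. The full eigendecomposition it needs per step is exactly the cost the proposed low-rank-SVD variants avoid.

```python
# A hedged sketch of one standard MEG step over the spectrahedron, assuming the
# update X <- exp(log X - eta * grad) / Tr(exp(log X - eta * grad)).
import numpy as np

def meg_step(X, grad, eta):
    """One standard MEG update; the full eigendecomposition is the costly step."""
    w, V = np.linalg.eigh(X)
    # Floor eigenvalues to keep the log finite (MEG iterates stay full rank
    # in exact arithmetic).
    logX = (V * np.log(np.maximum(w, 1e-12))) @ V.T
    M = (logX + logX.T) / 2 - eta * (grad + grad.T) / 2
    w2, V2 = np.linalg.eigh(M)
    e = np.exp(w2 - w2.max())   # shift for numerical stability; cancels below
    E = (V2 * e) @ V2.T
    return E / np.trace(E)
```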

Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems

no code implementations · 27 Sep 2018 · Dan Garber, Atara Kaplan

However, such problems are highly challenging to solve at large scale: the low-rank-promoting term prohibits efficient implementations of proximal methods for composite optimization, and even of simple subgradient methods (see the sketch below).

Task: Stochastic Optimization
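As a concrete illustration of the difficulty, assuming (as is typical) that the low-rank-promoting term is the nuclear norm, the proximal step reduces to singular value soft-thresholding, which requires a full SVD of the iterate on every step. This is a generic numpy sketch, not code from the paper.

```python
# Why proximal steps are costly at scale: prox of lam * ||.||_* is singular
# value soft-thresholding, which needs a full SVD of the (large) iterate.
import numpy as np

def prox_nuclear(X, lam):
    """prox_{lam * ||.||_*}(X): soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # full SVD every iteration
    return (U * np.maximum(s - lam, 0.0)) @ Vt
```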

Improved Complexities of Conditional Gradient-Type Methods with Applications to Robust Matrix Recovery Problems

no code implementations · 15 Feb 2018 · Dan Garber, Shoham Sabach, Atara Kaplan

Motivated by robust matrix recovery problems such as Robust Principal Component Analysis, we consider a general optimization problem of minimizing a smooth and strongly convex loss function applied to the sum of two blocks of variables, where each block is constrained or regularized individually.
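The paper studies a two-block setting, but the basic reason conditional-gradient-type methods suit robust matrix recovery can be seen with a single block: over a nuclear norm ball, the linear minimization oracle needs only the top singular pair, not a full SVD. The sketch below shows a standard Frank-Wolfe step with such an oracle; the function names and single-block setup are illustrative assumptions, not the authors' algorithm.

```python
# A hedged sketch of one conditional-gradient (Frank-Wolfe) step for
# min f(X) s.t. ||X||_* <= tau, illustrating the cheap rank-1 oracle.
import numpy as np
from scipy.sparse.linalg import svds

def nuclear_ball_lmo(grad, tau):
    """argmin_{||S||_* <= tau} <grad, S> = -tau * u1 v1^T (top singular pair of grad)."""
    u, s, vt = svds(grad, k=1)          # rank-1 SVD only; cheap at large scale
    return -tau * np.outer(u[:, 0], vt[0])

def frank_wolfe_step(X, grad, tau, t):
    """One Frank-Wolfe step at iteration t with the standard diminishing step size."""
    S = nuclear_ball_lmo(grad, tau)
    gamma = 2.0 / (t + 2.0)
    return (1.0 - gamma) * X + gamma * S
```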
