no code implementations • 17 Jan 2024 • Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher
Since Adam was introduced, several novel adaptive optimizers for deep learning have been proposed.
no code implementations • 5 Jan 2024 • Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher
In this paper, we introduce a novel subspace cubic regularized Newton method that achieves a dimension-independent global convergence rate of ${O}\left(\frac{1}{mk}+\frac{1}{k^2}\right)$ for solving convex optimization problems.
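As a rough illustration (not necessarily the paper's exact scheme), a subspace cubic-regularized Newton step restricts the standard cubic model to a low-dimensional subspace; in the sketch below, the matrix $V_k$ spanning an $m$-dimensional subspace and the regularization constant $M$ are generic notation assumed for exposition:

```latex
% Illustrative subspace cubic-regularized Newton step (generic notation,
% not necessarily the paper's): V_k has m columns spanning the subspace.
\[
  z_k \in \arg\min_{z \in \mathbb{R}^m}
    \nabla f(x_k)^\top V_k z
    + \tfrac{1}{2}\, z^\top V_k^\top \nabla^2 f(x_k)\, V_k z
    + \tfrac{M}{6}\, \lVert V_k z \rVert^3,
  \qquad
  x_{k+1} = x_k + V_k z_k .
\]
```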
no code implementations • 9 Oct 2023 • Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor
Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques -- e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA) -- in TAIL to adapt large pretrained models for new tasks with limited demonstration data.
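As a minimal, hypothetical sketch of the LoRA idea mentioned above (not the TAIL implementation; the class name, rank, and scaling are illustrative assumptions), a frozen pretrained linear layer can be augmented with a trainable low-rank update:

```python
# Minimal sketch of the LoRA idea: keep the pretrained weights frozen and
# learn only a low-rank correction W + (alpha/r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path plus trainable low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```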
no code implementations • 17 Jul 2023 • Roey Merchav, Shoham Sabach
In this paper, we propose the Bi-Sub-Gradient (Bi-SG) method, which is a generalization of the classical sub-gradient method to the setting of convex bi-level optimization problems.
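For context, the classical sub-gradient method that Bi-SG generalizes can be sketched as follows (this is not Bi-SG itself; the diminishing step-size schedule and function names are placeholder assumptions):

```python
# Sketch of the classical sub-gradient method: move against any subgradient
# with a diminishing step size (placeholders, not the Bi-SG algorithm).
import numpy as np

def subgradient_method(f_subgrad, x0, num_iters=1000, c=1.0):
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_iters + 1):
        g = f_subgrad(x)            # any subgradient of the objective at x
        step = c / np.sqrt(k)       # diminishing step-size rule
        x = x - step * g
    return x
```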
no code implementations • 25 Oct 2022 • Dan Garber, Tsur Livney, Shoham Sabach
This paper considers a convex composite optimization problem with affine constraints, which includes problems that take the form of minimizing a smooth convex objective function over the intersection of (simple) convex sets, or regularized with multiple (simple) functions.
2 code implementations • 6 Apr 2019 • Mahesh Chandra Mukkamala, Peter Ochs, Thomas Pock, Shoham Sabach
Backtracking line-search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms.
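A generic textbook sketch of how backtracking is typically combined with a proximal gradient step is shown below; the sufficient-decrease test and the names f, grad_f, and prox_g are illustrative assumptions, not necessarily the variant developed in the paper:

```python
# Sketch of one proximal gradient step with backtracking line-search:
# shrink the step size t until a quadratic upper-bound (sufficient decrease)
# condition holds at the candidate point.
import numpy as np

def prox_grad_step(f, grad_f, prox_g, x, t=1.0, beta=0.5, max_backtracks=50):
    fx, gx = f(x), grad_f(x)
    for _ in range(max_backtracks):
        x_new = prox_g(x - t * gx, t)        # proximal step with step size t
        diff = x_new - x
        # Accept if f(x_new) <= f(x) + <grad f(x), diff> + ||diff||^2 / (2t).
        if f(x_new) <= fx + gx @ diff + (diff @ diff) / (2 * t):
            return x_new, t
        t *= beta                            # otherwise shrink the step size
    return x_new, t
```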
no code implementations • 15 Feb 2018 • Dan Garber, Shoham Sabach, Atara Kaplan
Motivated by robust matrix recovery problems such as Robust Principal Component Analysis, we consider a general optimization problem of minimizing a smooth and strongly convex loss function applied to the sum of two blocks of variables, where each block of variables is constrained or regularized individually.
2 code implementations • 8 Feb 2017 • Thomas Pock, Shoham Sabach
In this paper we study nonconvex and nonsmooth optimization problems with semi-algebraic data, where the vector of variables is split into several blocks.
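To illustrate the block structure, a generic alternating proximal-gradient loop over two blocks might look like the following sketch (illustrative only; the function names, step sizes, and update order are assumptions, not the algorithm proposed in the paper):

```python
# Generic sketch of alternating proximal-gradient updates over two blocks
# of variables: each block is updated while the other is held fixed.
import numpy as np

def alternating_prox_grad(grad_H_x, grad_H_y, prox_f, prox_g,
                          x, y, step_x=0.1, step_y=0.1, num_iters=100):
    for _ in range(num_iters):
        # Update block x with y fixed, then block y using the new x.
        x = prox_f(x - step_x * grad_H_x(x, y), step_x)
        y = prox_g(y - step_y * grad_H_y(x, y), step_y)
    return x, y
```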
Optimization and Control