Search Results for author: Shoham Sabach

Found 8 papers, 2 papers with code

MADA: Meta-Adaptive Optimizers through hyper-gradient Descent

no code implementations • 17 Jan 2024 • Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher

Since Adam was introduced, several novel adaptive optimizers for deep learning have been proposed.

Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate

no code implementations • 5 Jan 2024 • Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher

In this paper, we introduce a novel subspace cubic regularized Newton method that achieves a dimension-independent global convergence rate of $O\left(\frac{1}{mk}+\frac{1}{k^2}\right)$ for solving convex optimization problems.

Second-order methods
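For context, a cubic regularized Newton step minimizes the standard cubic model of the objective; a subspace method of the kind the title describes restricts that model to a low-dimensional Krylov subspace. The notation below is the textbook one for cubic regularization (Nesterov and Polyak), not necessarily the paper's:

$$s_k = \mathop{\arg\min}_{s \in \mathcal{K}_m} \; \langle g, s \rangle + \frac{1}{2} \langle H s, s \rangle + \frac{M}{6} \lVert s \rVert^3, \qquad \mathcal{K}_m = \mathrm{span}\{g, Hg, \dots, H^{m-1}g\},$$

where $g = \nabla f(x_k)$, $H = \nabla^2 f(x_k)$, $M$ bounds the Lipschitz constant of the Hessian, and $x_{k+1} = x_k + s_k$. Restricting the subproblem to an $m$-dimensional Krylov subspace means each iteration needs only $m$ Hessian-vector products, which is what lets the rate above depend on $m$ and $k$ rather than on the ambient dimension.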

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

no code implementations • 9 Oct 2023 • Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor

Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques -- e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA) -- in TAIL to adapt large pretrained models for new tasks with limited demonstration data.

Continual Learning • Imitation Learning
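Of the three techniques named in the snippet, LoRA is the simplest to illustrate: freeze the pretrained weight and learn a low-rank additive update. The sketch below is a generic PyTorch illustration of that idea, not TAIL's code; the rank and scaling values are placeholder choices.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wrap a frozen pretrained linear layer with a trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():   # pretrained weights stay frozen
                p.requires_grad = False
            # effective weight: W + (alpha / rank) * B @ A
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

Only A and B receive gradients, so the trainable parameter count scales with the rank rather than with the layer size, which is what makes adaptation from limited demonstration data feasible.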

Convex Bi-Level Optimization Problems with Non-smooth Outer Objective Function

no code implementations • 17 Jul 2023 • Roey Merchav, Shoham Sabach

In this paper, we propose the Bi-Sub-Gradient (Bi-SG) method, which is a generalization of the classical sub-gradient method to the setting of convex bi-level optimization problems.
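The classical sub-gradient method that Bi-SG generalizes is a one-liner: step along any sub-gradient with a diminishing step size. A minimal sketch of that classical template follows (the step-size rule is one standard choice, not the paper's; the bi-level extension layers inner and outer objectives on top of this):

    import numpy as np

    def subgradient_method(f, subgrad, x0, iters=1000, c=1.0):
        """Classical sub-gradient method with diminishing step size c / sqrt(k)."""
        x, best_x, best_f = x0.copy(), x0.copy(), f(x0)
        for k in range(1, iters + 1):
            x = x - (c / np.sqrt(k)) * subgrad(x)
            if f(x) < best_f:                      # not a descent method, so
                best_x, best_f = x.copy(), f(x)    # track the best iterate seen
        return best_x

    # Example: minimize |x - 3|, whose sub-gradient is sign(x - 3)
    sol = subgradient_method(lambda x: abs(x - 3.0),
                             lambda x: np.sign(x - 3.0),
                             x0=np.array(0.0))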

Faster Projection-Free Augmented Lagrangian Methods via Weak Proximal Oracle

no code implementations • 25 Oct 2022 • Dan Garber, Tsur Livney, Shoham Sabach

This paper considers a convex composite optimization problem with affine constraints, which includes problems that take the form of minimizing a smooth convex objective function over the intersection of (simple) convex sets, or regularized with multiple (simple) functions.
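The affine-constrained template in the snippet is $\min_x f(x) + g(x)$ subject to $Ax = b$; the augmented Lagrangian that such methods work with is the standard object (the paper's contribution, per the title, is solving its primal subproblems with a weak proximal oracle instead of exact projections):

$$\mathcal{L}_\rho(x, y) = f(x) + g(x) + \langle y, Ax - b \rangle + \frac{\rho}{2} \lVert Ax - b \rVert^2,$$

with the dual ascent update $y_{k+1} = y_k + \rho\,(A x_{k+1} - b)$ after each approximate primal minimization.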

Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization

2 code implementations • 6 Apr 2019 • Mahesh Chandra Mukkamala, Peter Ochs, Thomas Pock, Shoham Sabach

Backtracking line-search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms.
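For contrast with the paper's convex-concave Bregman variant, here is a minimal sketch of the classical Euclidean backtracking test inside proximal gradient, with soft-thresholding as the proximal operator of an l1 regularizer (lam, beta, and the initial t are illustrative choices):

    import numpy as np

    def soft_threshold(v, t):
        """Prox of t * ||.||_1, the l1 regularizer's proximal operator."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_backtracking(f, grad_f, x, lam, t=1.0, beta=0.5, iters=100):
        """Proximal gradient on f(x) + lam * ||x||_1 with backtracking on t."""
        for _ in range(iters):
            g = grad_f(x)
            while True:
                x_new = soft_threshold(x - t * g, t * lam)
                d = x_new - x
                # sufficient-decrease test on the smooth part f
                if f(x_new) <= f(x) + g @ d + (d @ d) / (2 * t):
                    break
                t *= beta                      # shrink the step size and retry
            x = x_new
        return x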

Improved Complexities of Conditional Gradient-Type Methods with Applications to Robust Matrix Recovery Problems

no code implementations • 15 Feb 2018 • Dan Garber, Shoham Sabach, Atara Kaplan

Motivated by robust matrix recovery problems such as Robust Principal Component Analysis, we consider a general optimization problem of minimizing a smooth and strongly convex loss function applied to the sum of two blocks of variables, where each block of variables is constrained or regularized individually.
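The basic conditional gradient (Frank-Wolfe) step that such methods build on replaces projection with a linear minimization oracle; over a nuclear-norm ball, the set relevant to matrix recovery, that oracle needs only the top singular pair. A generic sketch, not the paper's improved variants:

    import numpy as np

    def frank_wolfe_nuclear(grad_f, shape, radius, iters=100):
        """Conditional gradient over the nuclear-norm ball {X : ||X||_* <= radius}."""
        X = np.zeros(shape)
        for k in range(iters):
            G = grad_f(X)
            # LMO: argmin over the ball of <G, S> is -radius * u1 v1^T,
            # where (u1, v1) is the top singular pair of G
            U, _, Vt = np.linalg.svd(G, full_matrices=False)
            S = -radius * np.outer(U[:, 0], Vt[0, :])
            gamma = 2.0 / (k + 2.0)            # standard step-size schedule
            X = (1 - gamma) * X + gamma * S
        return X

A full SVD is used here only for brevity; in practice one computes just the leading singular pair (e.g., by power iteration), which is where the per-iteration savings over projection come from.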

Inertial Proximal Alternating Linearized Minimization (iPALM) for Nonconvex and Nonsmooth Problems

2 code implementations • 8 Feb 2017 • Thomas Pock, Shoham Sabach

In this paper we study nonconvex and nonsmooth optimization problems with semi-algebraic data, where the variables vector is split into several blocks of variables.

Optimization and Control
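Since this paper ships with code, the iteration is worth sketching: alternate over the blocks, apply an inertial (heavy-ball) extrapolation, then take a proximal-linearized step on each block. Below is a simplified two-block sketch with a single inertia parameter alpha and 1/L step sizes; the paper allows block- and iteration-dependent parameters:

    import numpy as np

    def ipalm(grad_x, grad_y, prox_f, prox_g, x, y, Lx, Ly, alpha=0.8, iters=200):
        """Simplified iPALM for min H(x, y) + f(x) + g(y), prox_*(v, t) = prox_{t*}(v)."""
        x_old, y_old = x.copy(), y.copy()
        for _ in range(iters):
            # x-block: inertial extrapolation, then prox-linearized step
            xe = x + alpha * (x - x_old)
            x_old, x = x, prox_f(xe - grad_x(xe, y) / Lx, 1.0 / Lx)
            # y-block: same pattern, using the freshly updated x
            ye = y + alpha * (y - y_old)
            y_old, y = y, prox_g(ye - grad_y(x, ye) / Ly, 1.0 / Ly)
        return x, y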
