Search Results for author: Kfir Y. Levy

Found 13 papers, 2 papers with code

On the Global Convergence of Policy Gradient in Average Reward Markov Decision Processes

no code implementations • 11 Mar 2024 • Navdeep Kumar, Yashaswini Murthy, Itai Shufaro, Kfir Y. Levy, R. Srikant, Shie Mannor

We present the first finite time global convergence analysis of policy gradient in the context of infinite horizon average reward Markov decision processes (MDPs).

Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers

no code implementations • 5 Feb 2024 • Ron Dorfman, Naseem Yehya, Kfir Y. Levy

Byzantine-robust learning has emerged as a prominent fault-tolerant distributed machine learning framework.
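The abstract does not describe the paper's defense against switching Byzantine workers; as an illustrative sketch only, here is the classic coordinate-wise-median aggregator, one standard Byzantine-robust aggregation rule (not necessarily the method proposed here):

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients via the coordinate-wise median.

    A classic Byzantine-robust rule: even if a minority of workers send
    arbitrary (adversarial) vectors, each output coordinate is a median
    of the reported values, so it stays within the honest workers' range.
    """
    return np.median(np.stack(gradients), axis=0)

# Example: 4 honest workers report gradients near [1, -2];
# one Byzantine worker reports a huge outlier.
honest = [np.array([1.0, -2.0]) + 0.01 * np.random.randn(2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]
agg = coordinate_wise_median(honest + byzantine)
print(agg)  # close to [1, -2] despite the outlier
```

With a median-based rule the single outlier cannot move the aggregate, whereas a plain average would be dominated by it.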

$μ^2$-SGD: Stable Stochastic Optimization via a Double Momentum Mechanism

no code implementations • 9 Apr 2023 • Kfir Y. Levy

We consider stochastic convex optimization problems where the objective is an expectation over smooth functions.

Stochastic Optimization

SLowcal-SGD: Slow Query Points Improve Local-SGD for Stochastic Convex Optimization

no code implementations • 9 Apr 2023 • Kfir Y. Levy

We consider distributed learning scenarios where M machines interact with a parameter server along several communication rounds in order to minimize a joint objective function.
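A minimal sketch of the baseline Local-SGD protocol that this setting builds on (plain local steps followed by server-side averaging, not the slow-query-point variant the paper proposes; the objective, constants, and noise model below are illustrative assumptions):

```python
import numpy as np

def local_sgd(grad_fn, x0, n_machines=4, rounds=10, local_steps=5, lr=0.1, rng=None):
    """Vanilla Local-SGD: each round, every machine starts from the shared
    iterate, runs several local stochastic gradient steps, and the
    parameter server averages the resulting iterates."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(rounds):
        local_iterates = []
        for _ in range(n_machines):
            xi = x.copy()
            for _ in range(local_steps):
                noise = 0.01 * rng.standard_normal(xi.shape)  # stochastic gradient noise
                xi -= lr * (grad_fn(xi) + noise)
            local_iterates.append(xi)
        x = np.mean(local_iterates, axis=0)  # communication round: average iterates
    return x

# Toy joint objective f(x) = ||x||^2 / 2, so grad_fn(x) = x; minimizer at 0.
x_star = local_sgd(lambda x: x, x0=np.ones(3))
print(x_star)  # close to the origin
```

Each round costs one communication with the server regardless of `local_steps`, which is the trade-off local methods exploit.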

DoCoFL: Downlink Compression for Cross-Device Federated Learning

no code implementations • 1 Feb 2023 • Ron Dorfman, Shay Vargaftik, Yaniv Ben-Itzhak, Kfir Y. Levy

Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures.

Federated Learning

Online Meta-Learning in Adversarial Multi-Armed Bandits

no code implementations • 31 May 2022 • Ilya Osadchiy, Kfir Y. Levy, Ron Meir

The proposed solution comprises an inner learner that plays each episode separately and an outer learner that updates the hyper-parameters of the inner algorithm between episodes.

Meta-Learning • Multi-Armed Bandits

Adapting to Mixing Time in Stochastic Optimization with Markovian Data

1 code implementation • 9 Feb 2022 • Ron Dorfman, Kfir Y. Levy

We consider stochastic optimization problems where data is drawn from a Markov chain.

Stochastic Optimization

Robust Linear Regression for General Feature Distribution

no code implementations • 4 Feb 2022 • Tom Norman, Nir Weinberger, Kfir Y. Levy

In this work we go beyond these assumptions and investigate robust regression under a more general set of assumptions: $\textbf{(i)}$ we allow the covariance matrix to be either positive definite or positive semi-definite, $\textbf{(ii)}$ we do not necessarily assume that the features are centered, and $\textbf{(iii)}$ we make no assumption beyond boundedness (sub-Gaussianity) of the features and measurement noise.

Regression

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization

no code implementations • 22 Nov 2021 • Jun-Kun Wang, Jacob Abernethy, Kfir Y. Levy

We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.

STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization

no code implementations • 1 Nov 2021 • Kfir Y. Levy, Ali Kavis, Volkan Cevher

In this work we propose STORM+, a new method that is completely parameter-free, does not require large batch-sizes, and obtains the optimal $O(1/T^{1/3})$ rate for finding an approximate stationary point.
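The recursive momentum estimator from STORM, which STORM+ builds on, can be sketched as below. The momentum and step-size parameters are fixed here purely for illustration; STORM+'s contribution is making them fully adaptive, which is not shown, and the toy objective is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, noise):
    # Stochastic gradient of f(x) = ||x||^2 / 2, corrupted by additive noise.
    return x + noise

# STORM-style recursive momentum: evaluate the SAME sample at the current
# and previous iterate, so the correction term cancels most of the variance:
#   d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t))
x = np.ones(3)
d = stoch_grad(x, 0.1 * rng.standard_normal(3))
lr, a = 0.1, 0.5  # fixed here; STORM+ adapts both online
for _ in range(200):
    x_new = x - lr * d
    noise = 0.1 * rng.standard_normal(3)  # shared sample for both evaluations
    d = stoch_grad(x_new, noise) + (1 - a) * (d - stoch_grad(x, noise))
    x = x_new
print(np.linalg.norm(x))  # near the minimizer at the origin
```

Because the same noise sample appears in both gradient evaluations, the difference term has low variance, which is what removes the need for large batch sizes.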

Learning Under Delayed Feedback: Implicitly Adapting to Gradient Delays

no code implementations • 23 Jun 2021 • Rotem Zamir Aviv, Ido Hakimi, Assaf Schuster, Kfir Y. Levy

We consider stochastic convex optimization problems, where several machines act asynchronously in parallel while sharing a common memory.

Generative Minimization Networks: Training GANs Without Competition

no code implementations • 23 Mar 2021 • Paulina Grnarova, Yannic Kilcher, Kfir Y. Levy, Aurelien Lucchi, Thomas Hofmann

Among the problems known to practitioners are the lack of convergence guarantees and convergence to a non-optimal cycle.

Faster Neural Network Training with Approximate Tensor Operations

1 code implementation • NeurIPS 2021 • Menachem Adelman, Kfir Y. Levy, Ido Hakimi, Mark Silberstein

We propose a novel technique for faster deep neural network training which systematically applies sample-based approximation to the constituent tensor operations, i.e., matrix multiplications and convolutions.
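The textbook form of sample-based matrix-multiplication approximation is column-row sampling with norm-proportional probabilities; the paper's exact scheme may differ, so treat this as a hedged illustration of the general idea:

```python
import numpy as np

def sampled_matmul(A, B, k, rng=None):
    """Approximate A @ B from k sampled column-row pairs.

    Classic column-row sampling: pick index i with probability
    proportional to ||A[:, i]|| * ||B[i, :]|| and rescale, giving an
    unbiased estimate of A @ B whose variance shrinks as k grows.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    scale = 1.0 / (k * p[idx])          # importance-sampling correction
    return (A[:, idx] * scale) @ B[idx, :]

# Sanity check on a rank-1 pair, where the norm-proportional probabilities
# make the estimate exact no matter which indices are drawn.
a, w, b = np.arange(1.0, 5.0), np.arange(1.0, 7.0), np.arange(1.0, 4.0)
A = np.outer(a, w)   # 4 x 6
B = np.outer(w, b)   # 6 x 3
approx = sampled_matmul(A, B, k=2)
print(np.allclose(approx, A @ B))  # True
```

Computing only `k` of the inner-dimension terms is what yields the speed-up; for general matrices the approximation error decreases as `k` approaches the full inner dimension.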
