Search Results for author: Kfir Y. Levy

Found 19 papers, 7 papers with code

UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

no code implementations NeurIPS 2019 Ali Kavis, Kfir Y. Levy, Francis Bach, Volkan Cevher

To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting.

Adaptive Sampling for Stochastic Risk-Averse Learning

1 code implementation NeurIPS 2020 Sebastian Curi, Kfir Y. Levy, Stefanie Jegelka, Andreas Krause

In high-stakes machine learning applications, it is crucial to not only perform well on average, but also when restricted to difficult examples.

Point Processes
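
As a rough illustration of the objective behind this snippet (a minimal sketch of the conditional value-at-risk, CVaR, the risk measure that the paper's adaptive sampler targets; the function name and full-batch interface below are our own, not the paper's):

    import numpy as np

    def cvar(losses, alpha=0.1):
        """Average of the worst alpha-fraction of per-example losses.

        Illustrative only: the paper estimates this risk measure via
        adaptive sampling rather than from a full batch of losses.
        """
        losses = np.asarray(losses, dtype=float)
        k = max(1, int(np.ceil(alpha * losses.size)))  # size of the loss tail
        tail = np.partition(losses, -k)[-k:]           # the k largest losses
        return tail.mean()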

Evaluating GANs via Duality

no code implementations ICLR 2019 Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Thomas Hofmann, Andreas Krause

Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem.

Online Variance Reduction with Mixtures

1 code implementation 29 Mar 2019 Zalán Borsos, Sebastian Curi, Kfir Y. Levy, Andreas Krause

Adaptive importance sampling for stochastic optimization is a promising approach that offers improved convergence through variance reduction.

Stochastic Optimization
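
A minimal sketch of the underlying mechanism (plain importance sampling for stochastic gradients; the paper's contribution, learning a mixture of samplers online, is not shown, and the interface below is hypothetical):

    import numpy as np

    def importance_sampled_gradient(scores, grad_fn, rng=None):
        """One unbiased stochastic gradient under non-uniform sampling.

        scores: positive per-example importance estimates (e.g. gradient norms);
        grad_fn(i): gradient of example i (hypothetical interface).
        """
        rng = rng or np.random.default_rng()
        scores = np.asarray(scores, dtype=float)
        p = scores / scores.sum()            # sampling distribution over examples
        i = rng.choice(len(p), p=p)          # draw one example, biased toward large scores
        return grad_fn(i) / (len(p) * p[i])  # reweight so the estimator stays unbiased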

Multi-Player Bandits: The Adversarial Case

no code implementations 21 Feb 2019 Pragnya Alatur, Kfir Y. Levy, Andreas Krause

We consider a setting where multiple players sequentially choose among a common set of actions (arms).

A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise

no code implementations 5 Feb 2019 Francis Bach, Kfir Y. Levy

We consider variational inequalities coming from monotone operators, a setting that includes convex minimization and convex-concave saddle-point problems.
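
For reference, the problem class can be written as follows (standard formulation, our notation): given a monotone operator $F$ on a convex set $\mathcal{K}$, find $x^* \in \mathcal{K}$ such that

    \langle F(x^*),\; x - x^* \rangle \ge 0 \qquad \text{for all } x \in \mathcal{K}.

Taking $F = \nabla f$ recovers convex minimization, and stacking $F = (\nabla_x \phi,\, -\nabla_y \phi)$ recovers convex-concave saddle-point problems.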

A domain agnostic measure for monitoring and evaluating GANs

1 code implementation NeurIPS 2019 Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause

Evaluations are essential for: (i) relative assessment of different models and (ii) monitoring the progress of a single model throughout training.

Online Adaptive Methods, Universality and Acceleration

no code implementations NeurIPS 2018 Kfir Y. Levy, Alp Yurtsever, Volkan Cevher

We present a novel method for convex unconstrained optimization that, without any modifications, ensures: (i) accelerated convergence rate for smooth objectives, (ii) standard convergence rate in the general (non-smooth) setting, and (iii) standard convergence rate in the stochastic optimization setting.

Stochastic Optimization

Adaptive Input Estimation in Linear Dynamical Systems with Applications to Learning-from-Observations

no code implementations 19 Jun 2018 Sebastian Curi, Kfir Y. Levy, Andreas Krause

To this end, we introduce a novel estimation algorithm that explicitly trades off bias and variance to optimally reduce the overall estimation error.

Imitation Learning
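
The tradeoff this snippet refers to is the standard error decomposition (a generic identity, not the paper's specific estimator): for an estimate $\hat{u}$ of an input $u$,

    \mathbb{E}\,\|\hat{u} - u\|^2 \;=\; \underbrace{\|\mathbb{E}[\hat{u}] - u\|^2}_{\text{bias}^2} \;+\; \underbrace{\mathbb{E}\,\|\hat{u} - \mathbb{E}[\hat{u}]\|^2}_{\text{variance}},

so an estimator can accept some bias (e.g. by shrinking toward a prior) whenever the variance it removes is larger.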

Faster Rates for Convex-Concave Games

no code implementations 17 May 2018 Jacob Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games.
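
A minimal sketch of the generic no-regret route to equilibria (plain simultaneous gradient descent-ascent with iterate averaging; the faster rates in the paper come from optimistic variants that this sketch deliberately omits, and the gradient interface is hypothetical):

    import numpy as np

    def average_iterate_saddle(grad_x, grad_y, x0, y0, eta=0.1, steps=1000):
        """Approximate a saddle point of a convex-concave phi(x, y).

        grad_x(x, y), grad_y(x, y): partial gradients of phi. For
        convex-concave phi, the averaged iterates approach an equilibrium.
        """
        x, y = np.asarray(x0, float), np.asarray(y0, float)
        x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
        for _ in range(steps):
            x, y = x - eta * grad_x(x, y), y + eta * grad_y(x, y)  # descend in x, ascend in y
            x_avg += x / steps
            y_avg += y / steps
        return x_avg, y_avg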

Online Variance Reduction for Stochastic Optimization

2 code implementations 13 Feb 2018 Zalán Borsos, Andreas Krause, Kfir Y. Levy

Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data.

Stochastic Optimization

Online to Offline Conversions, Universality and Adaptive Minibatch Sizes

no code implementations NeurIPS 2017 Kfir Y. Levy

We present an approach towards convex optimization that relies on a novel scheme which converts online adaptive algorithms into offline methods.
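
A minimal sketch of the classical online-to-batch template that this snippet alludes to (the paper's scheme additionally derives universality and adaptive minibatch sizes, which are not shown; the interface below is hypothetical):

    import numpy as np

    def online_to_offline(online_step, sample_grad, x0, T=1000):
        """Convert an online algorithm into an offline stochastic method.

        online_step(x, g): one update of the online algorithm;
        sample_grad(x): unbiased stochastic gradient at x.
        Returning the averaged iterate turns the online regret bound
        into an offline convergence guarantee.
        """
        x = np.asarray(x0, float)
        x_sum = np.zeros_like(x)
        for _ in range(T):
            x = online_step(x, sample_grad(x))
            x_sum += x
        return x_sum / T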

k*-Nearest Neighbors: From Global to Local

no code implementations NeurIPS 2016 Oren Anava, Kfir Y. Levy

The weighted k-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning.

General Classification
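
For concreteness, a sketch of the plain distance-weighted k-NN baseline that the paper starts from (the paper's k* method instead derives a locally optimal k and weights per query point, which this sketch does not do):

    import numpy as np

    def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-12):
        """Classify x by a distance-weighted vote over its k nearest neighbors."""
        d = np.linalg.norm(X_train - x, axis=1)  # distances to all training points
        idx = np.argsort(d)[:k]                  # indices of the k nearest
        w = 1.0 / (d[idx] + eps)                 # closer neighbors get larger weight
        labels = y_train[idx]
        return max(set(labels.tolist()), key=lambda c: w[labels == c].sum())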

The Power of Normalization: Faster Evasion of Saddle Points

no code implementations 15 Nov 2016 Kfir Y. Levy

A commonly used heuristic in non-convex optimization is Normalized Gradient Descent (NGD) - a variant of gradient descent in which only the direction of the gradient is taken into account and its magnitude ignored.

Tensor Decomposition
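
A minimal sketch of the NGD update itself (the paper's saddle-point-evasion analysis adds conditions and noise arguments that this sketch omits):

    import numpy as np

    def ngd(grad, x0, eta=0.1, steps=1000, tol=1e-12):
        """Normalized Gradient Descent: step along the gradient's
        direction while discarding its magnitude."""
        x = np.asarray(x0, float).copy()
        for _ in range(steps):
            g = grad(x)
            norm = np.linalg.norm(g)
            if norm < tol:             # (near-)critical point: stop
                break
            x = x - eta * g / norm     # unit-length descent direction
        return x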

Beyond Convexity: Stochastic Quasi-Convex Optimization

no code implementations NeurIPS 2015 Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent that updates according to the direction of the gradients rather than the gradients themselves.

On Graduated Optimization for Stochastic Non-Convex Problems

1 code implementation 12 Mar 2015 Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate.
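
A rough sketch of the graduated-optimization template (coarse-to-fine smoothing by averaging gradients over random perturbations; the paper's algorithm handles noisy gradient feedback with a proven rate, neither of which this sketch reproduces, and the interface is hypothetical):

    import numpy as np

    def graduated_optimization(f_grad, x0, radii=(1.0, 0.3, 0.1, 0.0),
                               eta=0.05, steps=200, n_samples=20, rng=None):
        """Minimize a non-convex objective via a sequence of smoothed versions.

        f_grad(x): gradient of the objective. The smoothed gradient at
        radius r averages f_grad over random perturbations of size r.
        """
        rng = rng or np.random.default_rng()
        x = np.asarray(x0, float).copy()
        for r in radii:                       # progressively less smoothing
            for _ in range(steps):
                if r > 0:
                    u = rng.normal(size=(n_samples,) + x.shape)
                    g = np.mean([f_grad(x + r * ui) for ui in u], axis=0)
                else:
                    g = f_grad(x)             # final stage: original objective
                x = x - eta * g
        return x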

Logistic Regression: Tight Bounds for Stochastic and Online Optimization

no code implementations 15 May 2014 Elad Hazan, Tomer Koren, Kfir Y. Levy

We show that, in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub-exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss.

Regression
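
For reference, the two losses being compared, written on a margin $z = y\,\langle w, x \rangle$ (standard definitions, our notation):

    \ell_{\mathrm{logistic}}(z) = \log\!\left(1 + e^{-z}\right), \qquad \ell_{\mathrm{hinge}}(z) = \max(0,\, 1 - z).

The result says that the smoothness of the logistic loss buys no better rate than the non-smooth hinge loss for any sub-exponential number of iterations.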
