Search Results for author: Roi Livni

Found 29 papers, 1 paper with code

Better Best of Both Worlds Bounds for Bandits with Switching Costs

no code implementations 7 Jun 2022 Idan Amir, Guy Azov, Tomer Koren, Roi Livni

We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer, Seldin and Cesa-Bianchi, 2021.

Making Progress Based on False Discoveries

no code implementations 19 Apr 2022 Roi Livni

Second, we show that, under certain assumptions on the oracle, in an interaction with gradient descent $\tilde \Omega(1/\epsilon^{2.5})$ samples are necessary.

Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization

no code implementations 27 Feb 2022 Idan Amir, Roi Livni, Nathan Srebro

We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e. where each instantaneous loss is a scalar convex function of a linear function.

Benign Underfitting of Stochastic Gradient Descent

no code implementations 27 Feb 2022 Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman

We study to what extent stochastic gradient descent (SGD) may be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data.

Never Go Full Batch (in Stochastic Convex Optimization)

no code implementations NeurIPS 2021 Idan Amir, Yair Carmon, Tomer Koren, Roi Livni

We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants.

Littlestone Classes are Privately Online Learnable

no code implementations NeurIPS 2021 Noah Golowich, Roi Livni

Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes -- comparable to the optimal mistake bound in the non-private case, up to a logarithmic factor.

online learning

Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games

no code implementations 2 Feb 2021 Steve Hanneke, Roi Livni, Shay Moran

More precisely, given any concept class $\mathcal{C}$ and any hypothesis class $\mathcal{H}$, we provide nearly tight bounds (up to a log factor) on the optimal mistake bounds for online learning $\mathcal{C}$ using predictors from $\mathcal{H}$. Our bound yields an exponential improvement over the previously best known bound by Chase and Freitag (2020).

online learning

SGD Generalizes Better Than GD (And Regularization Doesn't Help)

no code implementations 1 Feb 2021 Idan Amir, Tomer Koren, Roi Livni

We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model.

Synthetic Data Generators -- Sequential and Private

no code implementations NeurIPS 2020 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

A Limitation of the PAC-Bayes Framework

no code implementations NeurIPS 2020 Roi Livni, Shay Moran

PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98).

Generalization Bounds

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

no code implementations NeurIPS 2020 Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni

The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-days overparameterized learning algorithms.

An Equivalence Between Private Classification and Online Prediction

no code implementations 1 Mar 2020 Mark Bun, Roi Livni, Shay Moran

We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.

Classification · General Classification

Prediction with Corrupted Expert Advice

no code implementations NeurIPS 2020 Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour

We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.
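For context, the classical Hedge (multiplicative weights) algorithm for prediction with expert advice can be sketched as below, with stochastic losses and a handful of rounds of adversarially flipped feedback; this is a generic baseline illustrating the setting, not the robust algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

K, T, eta, C = 10, 2000, 0.05, 50      # experts, horizon, step size, corruptions
mean_loss = rng.uniform(0.2, 0.8, size=K)
corrupt_rounds = set(rng.choice(T, size=C, replace=False))

w = np.ones(K)            # Hedge weights
total = np.zeros(K)       # true cumulative losses per expert
alg_loss = 0.0
for t in range(T):
    p = w / w.sum()
    ell = (rng.uniform(size=K) < mean_loss).astype(float)   # Bernoulli losses
    obs = 1.0 - ell if t in corrupt_rounds else ell         # corrupted feedback
    alg_loss += p @ ell
    total += ell
    w *= np.exp(-eta * obs)   # update sees only the (possibly corrupted) losses

regret = alg_loss - total.min()
print(regret)
```

With only moderately many corrupted rounds, plain Hedge still ends up with small regret against the best expert's true cumulative loss.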

Graph-based Discriminators: Sample Complexity and Expressiveness

no code implementations NeurIPS 2019 Roi Livni, Yishay Mansour

A function $g\in \mathcal{G}$ distinguishes between two distributions if the expected value of $g$, on a $k$-tuple of i.i.d. examples, on the two distributions is (significantly) different.

Learning Theory

On the Expressive Power of Kernel Methods and the Efficiency of Kernel Learning by Association Schemes

no code implementations 13 Feb 2019 Pravesh K. Kothari, Roi Livni

We study the expressive power of kernel methods and the algorithmic feasibility of multiple kernel learning for a special rich class of kernels.

Synthetic Data Generators: Sequential and Private

no code implementations 9 Feb 2019 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

Private PAC learning implies finite Littlestone dimension

no code implementations 4 Jun 2018 Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran

We show that every approximately differentially private learning algorithm (possibly improper) for a class $H$ with Littlestone dimension $d$ requires $\Omega\bigl(\log^*(d)\bigr)$ examples.

PAC learning
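The $\log^*$ (iterated logarithm) appearing in this lower bound grows extraordinarily slowly; a small helper, included here purely as an illustration and not taken from the paper, makes this concrete.

```python
import math

def log_star(x) -> int:
    """Iterated logarithm base 2: how many times log2 must be
    applied before the value drops to at most 1."""
    n = 0
    while x > 1:
        x = math.log2(x)
        n += 1
    return n

# log* grows absurdly slowly: it is only 5 even for 2**65536.
print(log_star(2), log_star(16), log_star(65536), log_star(2 ** 65536))
```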

Affine-Invariant Online Optimization and the Low-rank Experts Problem

no code implementations NeurIPS 2017 Tomer Koren, Roi Livni

We present a new affine-invariant optimization algorithm called Online Lazy Newton.

On Communication Complexity of Classification Problems

no code implementations 16 Nov 2017 Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To naturally fit into the framework of learning theory, the players can send each other examples (as well as bits) where each example/bit costs one unit of communication.

Classification · General Classification +1

Multi-Armed Bandits with Metric Movement Costs

no code implementations NeurIPS 2017 Tomer Koren, Roi Livni, Yishay Mansour

We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions.

Multi-Armed Bandits

Agnostic Learning by Refuting

no code implementations 12 Sep 2017 Pravesh K. Kothari, Roi Livni

We introduce \emph{refutation complexity}, a natural computational analog of Rademacher complexity of a Boolean concept class and show that it exactly characterizes the sample complexity of \emph{efficient} agnostic learning.

PAC learning

Learning Infinite Layer Networks without the Kernel Trick

no code implementations ICML 2017 Roi Livni, Daniel Carmon, Amir Globerson

Infinite Layer Networks (ILN) have been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.

Bandits with Movement Costs and Adaptive Pricing

no code implementations 24 Feb 2017 Tomer Koren, Roi Livni, Yishay Mansour

In this setting, we give a new algorithm that establishes a regret of $\widetilde{O}(\sqrt{kT} + T/k)$, where $k$ is the number of actions and $T$ is the time horizon.
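As a quick sanity check on the stated bound, and assuming nothing beyond the formula itself: if one treats $k$ as a free parameter (e.g. a discretization level), the two terms of $\sqrt{kT} + T/k$ balance at $k$ of order $T^{1/3}$, for overall regret of order $T^{2/3}$.

```python
import numpy as np

# Numeric check of the trade-off in O(sqrt(kT) + T/k): the minimizer
# over k scales like T**(1/3), giving regret of order T**(2/3).
def bound(k, T):
    return np.sqrt(k * T) + T / k

T = 10 ** 6
ks = np.arange(1, 10_000)
k_best = int(ks[np.argmin(bound(ks, T))])
print(k_best, bound(k_best, T) / T ** (2 / 3))   # k_best ~ 1.6 * T**(1/3)
```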

Online Pricing with Strategic and Patient Buyers

no code implementations NeurIPS 2016 Michal Feldman, Tomer Koren, Roi Livni, Yishay Mansour, Aviv Zohar

We consider a seller with an unlimited supply of a single good, who is faced with a stream of $T$ buyers.

Learning Infinite-Layer Networks: Without the Kernel Trick

no code implementations 16 Jun 2016 Roi Livni, Daniel Carmon, Amir Globerson

Infinite-Layer Networks (ILN) have recently been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.

Online Learning with Low Rank Experts

no code implementations 21 Mar 2016 Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour

We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace.

online learning

Classification with Low Rank and Missing Data

no code implementations 14 Jan 2015 Elad Hazan, Roi Livni, Yishay Mansour

We consider classification and regression tasks where we have missing data and assume that the (clean) data resides in a low rank subspace.

Classification · General Classification
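As a point of reference for this setting (data near a low-rank subspace with missing entries), the snippet below runs classical iterated rank-$r$ SVD imputation; this explicit-completion routine is a standard baseline, not the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data lying exactly in a rank-r subspace, with ~30% of entries masked.
n, d, r = 300, 20, 3
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))
missing = rng.uniform(size=(n, d)) < 0.3

# Classical hard-impute: alternate a rank-r SVD projection with
# restoring the observed entries.
Z = np.where(missing, 0.0, X)          # start with zeros in the gaps
for _ in range(100):
    u, s, vt = np.linalg.svd(Z, full_matrices=False)
    Z = np.where(missing, (u[:, :r] * s[:r]) @ vt[:r], X)

rel_err = np.linalg.norm(Z - X) / np.linalg.norm(X)
print(rel_err)
```

On exactly low-rank data with enough observed entries per row, this iteration recovers the missing values to high accuracy.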

An Algorithm for Training Polynomial Networks

no code implementations 26 Apr 2013 Roi Livni, Shai Shalev-Shwartz, Ohad Shamir

The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the \emph{Basis Learner}.
