Search Results for author: Roi Livni

Found 35 papers, 1 paper with code

An Algorithm for Training Polynomial Networks

no code implementations26 Apr 2013 Roi Livni, Shai Shalev-Shwartz, Ohad Shamir

The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the \emph{Basis Learner}.

Classification with Low Rank and Missing Data

no code implementations14 Jan 2015 Elad Hazan, Roi Livni, Yishay Mansour

We consider classification and regression tasks where we have missing data and assume that the (clean) data resides in a low rank subspace.

Classification, General Classification +1

Online Learning with Low Rank Experts

no code implementations21 Mar 2016 Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour

We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace.
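
A standard way to formalize the low-dimensional structure described above (illustrative notation, not necessarily the paper's) is to assume every loss vector lies in a fixed but unknown subspace: writing $\ell_t \in \mathbb{R}^N$ for the loss vector over the $N$ experts at round $t$, one posits $\ell_t = U c_t$ for some fixed $U \in \mathbb{R}^{N \times d}$ and arbitrary $c_t \in \mathbb{R}^d$, with $d \ll N$.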

Learning Infinite-Layer Networks: Without the Kernel Trick

no code implementations16 Jun 2016 Roi Livni, Daniel Carmon, Amir Globerson

Infinite-Layer Networks (ILN) have recently been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.

Online Pricing with Strategic and Patient Buyers

no code implementations NeurIPS 2016 Michal Feldman, Tomer Koren, Roi Livni, Yishay Mansour, Aviv Zohar

We consider a seller with an unlimited supply of a single good, who is faced with a stream of $T$ buyers.

Bandits with Movement Costs and Adaptive Pricing

no code implementations24 Feb 2017 Tomer Koren, Roi Livni, Yishay Mansour

In this setting, we give a new algorithm that establishes a regret of $\widetilde{O}(\sqrt{kT} + T/k)$, where $k$ is the number of actions and $T$ is the time horizon.

Learning Infinite Layer Networks without the Kernel Trick

no code implementations ICML 2017 Roi Livni, Daniel Carmon, Amir Globerson

Infinite Layer Networks (ILN) have been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.

Agnostic Learning by Refuting

no code implementations12 Sep 2017 Pravesh K. Kothari, Roi Livni

We introduce \emph{refutation complexity}, a natural computational analog of the Rademacher complexity of a Boolean concept class, and show that it exactly characterizes the sample complexity of \emph{efficient} agnostic learning.

PAC learning
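
For context, the refutation complexity introduced here is a computational analog of the standard (empirical) Rademacher complexity of a class $\mathcal{H}$ over a sample $S = (x_1, \dots, x_m)$, namely $\widehat{\mathcal{R}}_S(\mathcal{H}) = \mathbb{E}_{\sigma}\left[\sup_{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i h(x_i)\right]$, where the $\sigma_i$ are independent uniform $\pm 1$ signs; this textbook definition is included only as a reference point.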

Multi-Armed Bandits with Metric Movement Costs

no code implementations NeurIPS 2017 Tomer Koren, Roi Livni, Yishay Mansour

We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions.

Multi-Armed Bandits

On Communication Complexity of Classification Problems

no code implementations16 Nov 2017 Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To fit naturally into the framework of learning theory, the players can send each other examples (as well as bits), where each example or bit costs one unit of communication.

BIG-bench Machine Learning, Classification +2

Affine-Invariant Online Optimization and the Low-rank Experts Problem

no code implementations NeurIPS 2017 Tomer Koren, Roi Livni

We present a new affine-invariant optimization algorithm called Online Lazy Newton.

Private PAC learning implies finite Littlestone dimension

no code implementations4 Jun 2018 Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran

We show that every approximately differentially private learning algorithm (possibly improper) for a class $H$ with Littlestone dimension~$d$ requires $\Omega\bigl(\log^*(d)\bigr)$ examples.

Open-Ended Question Answering, PAC learning

Synthetic Data Generators: Sequential and Private

no code implementations9 Feb 2019 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

On the Expressive Power of Kernel Methods and the Efficiency of Kernel Learning by Association Schemes

no code implementations13 Feb 2019 Pravesh K. Kothari, Roi Livni

We study the expressive power of kernel methods and the algorithmic feasibility of multiple kernel learning for a special rich class of kernels.

Graph-based Discriminators: Sample Complexity and Expressiveness

no code implementations NeurIPS 2019 Roi Livni, Yishay Mansour

A function $g\in \mathcal{G}$ distinguishes between two distributions if the expected value of $g$ on a $k$-tuple of i.i.d. examples differs (significantly) between the two distributions.

Learning Theory
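
In standard notation (which may differ slightly from the paper's), the distinguishing condition above says that for distributions $P$ and $Q$ and some margin $\epsilon > 0$, $\left| \mathbb{E}_{\bar{x} \sim P^k}[g(\bar{x})] - \mathbb{E}_{\bar{x} \sim Q^k}[g(\bar{x})] \right| \ge \epsilon$, where $\bar{x} = (x_1, \dots, x_k)$ is a $k$-tuple of i.i.d. examples.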

Prediction with Corrupted Expert Advice

no code implementations NeurIPS 2020 Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour

We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.
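
As background for the experts setting studied in this paper, the sketch below shows the classical multiplicative-weights (Hedge) learner with clean feedback; it is not the paper's corruption-robust algorithm, and the synthetic losses and all names are purely illustrative.

```python
import numpy as np

def hedge(losses, eta):
    """Classical multiplicative-weights (Hedge) learner for prediction
    with expert advice. `losses` is a (T, N) array of per-round expert
    losses in [0, 1]; `eta` is the learning rate. Returns the learner's
    cumulative expected loss."""
    T, N = losses.shape
    weights = np.ones(N)
    total = 0.0
    for t in range(T):
        probs = weights / weights.sum()      # play a distribution over experts
        total += probs @ losses[t]           # expected loss of the learner
        weights *= np.exp(-eta * losses[t])  # multiplicative update
    return total

# Toy run on synthetic stochastic losses (illustrative only).
rng = np.random.default_rng(0)
T, N = 1000, 10
losses = rng.uniform(size=(T, N))
losses[:, 3] *= 0.5                          # expert 3 is best on average
eta = np.sqrt(2 * np.log(N) / T)             # standard learning-rate tuning
print("learner:", hedge(losses, eta), "best expert:", losses.sum(axis=0).min())
```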

An Equivalence Between Private Classification and Online Prediction

no code implementations1 Mar 2020 Mark Bun, Roi Livni, Shay Moran

We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.

Classification, General Classification +1

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

no code implementations NeurIPS 2020 Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni

The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms.

A Limitation of the PAC-Bayes Framework

no code implementations NeurIPS 2020 Roi Livni, Shay Moran

PAC-Bayes, introduced by McAllester ('98), is a useful framework for deriving generalization bounds.

Generalization Bounds

Synthetic Data Generators -- Sequential and Private

no code implementations NeurIPS 2020 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

SGD Generalizes Better Than GD (And Regularization Doesn't Help)

no code implementations1 Feb 2021 Idan Amir, Tomer Koren, Roi Livni

We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model.
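
For orientation, the sketch below contrasts the two access patterns compared in this paper, one-sample stochastic gradients versus exact empirical-risk gradients, on a toy least-squares objective; it illustrates the update rules only and does not reproduce the paper's separation construction. All names and data are illustrative.

```python
import numpy as np

def sgd_one_pass(X, y, lr):
    """One pass of single-sample SGD on a toy squared-loss objective."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        grad = (X[i] @ w - y[i]) * X[i]      # gradient of one sample's loss
        w -= lr * grad
    return w

def full_batch_gd(X, y, lr, steps):
    """Full-batch GD: every step uses the exact empirical-risk gradient."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n         # gradient averaged over all samples
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
print(sgd_one_pass(X, y, lr=0.01)[:3])
print(full_batch_gd(X, y, lr=0.1, steps=n)[:3])
```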

Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games

no code implementations2 Feb 2021 Steve Hanneke, Roi Livni, Shay Moran

More precisely, given any concept class C and any hypothesis class H, we provide nearly tight bounds (up to a log factor) on the optimal mistake bounds for online learning C using predictors from H. Our bound yields an exponential improvement over the best previously known bound by Chase and Freitag (2020).

Littlestone Classes are Privately Online Learnable

no code implementations NeurIPS 2021 Noah Golowich, Roi Livni

Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes, comparable to the optimal mistake bound in the non-private case up to a logarithmic factor.

Never Go Full Batch (in Stochastic Convex Optimization)

no code implementations NeurIPS 2021 Idan Amir, Yair Carmon, Tomer Koren, Roi Livni

We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that access only the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants.

Benign Underfitting of Stochastic Gradient Descent

no code implementations27 Feb 2022 Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman

We study to what extent may stochastic gradient descent (SGD) be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data.

Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization

no code implementations27 Feb 2022 Idan Amir, Roi Livni, Nathan Srebro

We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e., where each instantaneous loss is a scalar convex function of a linear function.
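
In symbols (standard notation for this class of problems, not necessarily the paper's), each instantaneous loss has the form $f(w; z) = \phi_z(\langle w, x_z \rangle)$ for a convex Lipschitz scalar function $\phi_z$; linear prediction with the hinge loss, $f(w; (x, y)) = \max\{0,\, 1 - y \langle w, x \rangle\}$, is one example.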

Making Progress Based on False Discoveries

no code implementations19 Apr 2022 Roi Livni

Or, might it be that inaccurate gradient estimates are \emph{necessary} for finding the minimum of a stochastic convex function at an optimal statistical rate?

Better Best of Both Worlds Bounds for Bandits with Switching Costs

no code implementations7 Jun 2022 Idan Amir, Guy Azov, Tomer Koren, Roi Livni

We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer, Seldin, and Cesa-Bianchi (2021).

Information Theoretic Lower Bounds for Information Theoretic Upper Bounds

no code implementations NeurIPS 2023 Roi Livni

We examine the relationship between the generalization of an algorithm and the mutual information between its output model and the empirical sample, in the context of stochastic convex optimization.

Generalization Bounds
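
For context, the standard mutual-information generalization bound examined in this line of work (due to Xu and Raginsky, 2017) states that for a loss that is $\sigma$-subgaussian, $\left| \mathbb{E}[F(W) - \widehat{F}_S(W)] \right| \le \sqrt{\frac{2\sigma^2}{n} I(W; S)}$, where $W$ is the algorithm's output, $S$ the sample of size $n$, $F$ the population risk, and $\widehat{F}_S$ the empirical risk.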

Can Copyright be Reduced to Privacy?

no code implementations24 May 2023 Niva Elkin-Koren, Uri Hacohen, Roi Livni, Shay Moran

In this work, we examine whether such algorithmic stability techniques are suitable to ensure the responsible use of generative models without inadvertently violating copyright laws.

The Sample Complexity Of ERMs In Stochastic Convex Optimization

no code implementations9 Nov 2023 Daniel Carmon, Roi Livni, Amir Yehudayoff

In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon}+\frac{1}{\epsilon^2})$ data points are also sufficient.

Not All Similarities Are Created Equal: Leveraging Data-Driven Biases to Inform GenAI Copyright Disputes

no code implementations26 Mar 2024 Uri Hacohen, Adi Haviv, Shahar Sarfaty, Bruria Friedman, Niva Elkin-Koren, Roi Livni, Amit H Bermano

The advent of Generative Artificial Intelligence (GenAI) models, including GitHub Copilot, OpenAI GPT, and Stable Diffusion, has revolutionized content creation, enabling non-professionals to produce high-quality content across various domains.

The Sample Complexity of Gradient Descent in Stochastic Convex Optimization

no code implementations7 Apr 2024 Roi Livni

We analyze the sample complexity of full-batch Gradient Descent (GD) in the setup of non-smooth Stochastic Convex Optimization.
