no code implementations • 26 Apr 2013 • Roi Livni, Shai Shalev-Shwartz, Ohad Shamir
The main goal of this paper is to derive an efficient layer-by-layer algorithm for training such networks, which we call the \emph{Basis Learner}.
1 code implementation • NeurIPS 2014 • Roi Livni, Shai Shalev-Shwartz, Ohad Shamir
It is well-known that neural networks are computationally hard to train.
no code implementations • 14 Jan 2015 • Elad Hazan, Roi Livni, Yishay Mansour
We consider classification and regression tasks where we have missing data and assume that the (clean) data resides in a low rank subspace.
no code implementations • 21 Mar 2016 • Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour
We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace.
no code implementations • 16 Jun 2016 • Roi Livni, Daniel Carmon, Amir Globerson
Infinite-Layer Networks (ILN) have recently been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.
no code implementations • NeurIPS 2016 • Michal Feldman, Tomer Koren, Roi Livni, Yishay Mansour, Aviv Zohar
We consider a seller with an unlimited supply of a single good, who is faced with a stream of $T$ buyers.
no code implementations • 24 Feb 2017 • Tomer Koren, Roi Livni, Yishay Mansour
In this setting, we give a new algorithm that establishes a regret of $\widetilde{O}(\sqrt{kT} + T/k)$, where $k$ is the number of actions and $T$ is the time horizon.
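A quick back-of-the-envelope reading of this bound (ours, not the paper's analysis): the two terms balance when

$$\sqrt{kT} = \frac{T}{k} \iff k^{3} = T \iff k = T^{1/3},$$

so the $T/k$ term dominates for $k \le T^{1/3}$ and the $\sqrt{kT}$ term for larger $k$; the bound is minimized at $k = \Theta(T^{1/3})$, where it evaluates to $\widetilde{O}(T^{2/3})$.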
no code implementations • ICML 2017 • Roi Livni, Daniel Carmon, Amir Globerson
Infinite Layer Networks (ILN) have been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods.
no code implementations • 12 Sep 2017 • Pravesh K. Kothari, Roi Livni
We introduce \emph{refutation complexity}, a natural computational analog of Rademacher complexity of a Boolean concept class and show that it exactly characterizes the sample complexity of \emph{efficient} agnostic learning.
no code implementations • NeurIPS 2017 • Tomer Koren, Roi Livni, Yishay Mansour
We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions.
no code implementations • 16 Nov 2017 • Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff
To fit naturally into the framework of learning theory, the players can send each other examples (as well as bits), where each example or bit costs one unit of communication.
no code implementations • NeurIPS 2017 • Tomer Koren, Roi Livni
We present a new affine-invariant optimization algorithm called Online Lazy Newton.
no code implementations • 4 Jun 2018 • Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran
We show that every approximately differentially private learning algorithm (possibly improper) for a class $H$ with Littlestone dimension~$d$ requires $\Omega\bigl(\log^*(d)\bigr)$ examples.
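Here $\log^*$ is the iterated logarithm, defined by

$$\log^*(d) \;=\; \min\bigl\{\, n \ge 0 \;:\; \underbrace{\log\log\cdots\log}_{n \text{ times}}(d) \le 1 \,\bigr\},$$

an extraordinarily slowly growing function: for example, $\log^*(65536) = 4$ and $\log^*(2^{65536}) = 5$ (base 2).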
no code implementations • 13 Feb 2019 • Pravesh K. Kothari, Roi Livni
We study the expressive power of kernel methods and the algorithmic feasibility of multiple kernel learning for a special rich class of kernels.
no code implementations • NeurIPS 2019 • Roi Livni, Yishay Mansour
A function $g\in \mathcal{G}$ distinguishes between two distributions if the expected value of $g$ on a $k$-tuple of i.i.d. examples differs (significantly) between the two distributions.
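A minimal numerical illustration of this definition (hypothetical distributions and test function $g$; not from the paper): estimate the expectation of $g$ over $k$-tuples of i.i.d. samples from each distribution and compare.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_g(sample, g, k, n_trials=10_000):
    # Monte Carlo estimate of E[g(x_1, ..., x_k)] with x_i drawn i.i.d.
    # from `sample` (a zero-argument function returning a single draw).
    return np.mean([g([sample() for _ in range(k)]) for _ in range(n_trials)])

# Hypothetical example: g fires when all k draws agree, which already
# distinguishes a fair coin from a biased one at k = 3.
g = lambda xs: float(len(set(xs)) == 1)
fair = lambda: int(rng.random() < 0.5)
biased = lambda: int(rng.random() < 0.9)

gap = abs(expected_g(fair, g, k=3) - expected_g(biased, g, k=3))
print(f"estimated distinguishing gap: {gap:.3f}")  # ~|0.730 - 0.250| = 0.48
```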
no code implementations • NeurIPS 2020 • Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour
We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.
no code implementations • 1 Mar 2020 • Mark Bun, Roi Livni, Shay Moran
We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially private algorithm.
no code implementations • NeurIPS 2020 • Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni
The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms.
no code implementations • NeurIPS 2020 • Roi Livni, Shay Moran
PAC-Bayes, introduced by McAllester (1998), is a useful framework for deriving generalization bounds.
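For context, one standard form of such a bound (Maurer's refinement of McAllester's; notation ours): for any prior $P$ over hypotheses fixed before seeing an i.i.d. $m$-sample, with probability at least $1-\delta$, simultaneously for all posteriors $Q$,

$$L(Q) \;\le\; \widehat{L}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q\,\Vert\,P) + \ln\bigl(2\sqrt{m}/\delta\bigr)}{2m}},$$

where $L$ and $\widehat{L}$ denote the population and empirical risk of the randomized predictor $Q$.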
no code implementations • NeurIPS 2020 • Olivier Bousquet, Roi Livni, Shay Moran
We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately and properly PAC learnable admits a private synthetic data generator (perhaps not an efficient one).
no code implementations • 1 Feb 2021 • Idan Amir, Tomer Koren, Roi Livni
We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model.
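For concreteness, a minimal sketch of the two update rules being compared (generic and illustrative; this is not the paper's construction):

```python
import numpy as np

def sgd(grad_i, w0, n, eta, steps, rng):
    # One-sample stochastic gradient descent: each step uses the
    # gradient of the loss on a single randomly chosen data point.
    w = w0.copy()
    for _ in range(steps):
        i = rng.integers(n)
        w -= eta * grad_i(w, i)
    return w

def full_batch_gd(grad_i, w0, n, eta, steps):
    # Full-batch gradient descent: each step uses the exact gradient
    # of the empirical risk, averaged over all n data points.
    w = w0.copy()
    for _ in range(steps):
        w -= eta * np.mean([grad_i(w, i) for i in range(n)], axis=0)
    return w
```

The separation result says that, on a suitable stochastic convex problem, these two rules can generalize very differently even though both optimize the same empirical objective.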
no code implementations • 2 Feb 2021 • Steve Hanneke, Roi Livni, Shay Moran
More precisely, given any concept class $C$ and any hypothesis class $H$, we provide nearly tight bounds (up to a log factor) on the optimal mistake bound for online learning $C$ using predictors from $H$. Our bound yields an exponential improvement over the previous best known bound, due to Chase and Freitag (2020).
no code implementations • NeurIPS 2021 • Noah Golowich, Roi Livni
Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes -- comparable to the optimal mistake bound in the non-private case, up to a logarithmic factor.
no code implementations • NeurIPS 2021 • Idan Amir, Yair Carmon, Tomer Koren, Roi Livni
We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: first-order methods that access only the exact gradient of the empirical risk (rather than gradients with respect to individual data points), a family that includes gradient descent, mirror descent, and their regularized and/or accelerated variants.
no code implementations • 27 Feb 2022 • Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman
We study to what extent may stochastic gradient descent (SGD) be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data.
no code implementations • 27 Feb 2022 • Idan Amir, Roi Livni, Nathan Srebro
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e., where each instantaneous loss is a scalar convex function of a linear function.
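Spelled out, "generalized linear form" means instantaneous losses of the shape (notation ours):

$$f(w; z) \;=\; \phi_{z}\bigl(\langle w, x_{z} \rangle\bigr), \qquad \phi_{z}:\mathbb{R}\to\mathbb{R} \text{ convex and Lipschitz},$$

e.g., the hinge loss $\phi_z(t) = \max\{0,\, 1 - y_z t\}$ for linear classification.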
no code implementations • 19 Apr 2022 • Roi Livni
Might it instead be that inaccurate gradient estimates are \emph{necessary} for finding the minimum of a stochastic convex function at an optimal statistical rate?
no code implementations • 7 Jun 2022 • Idan Amir, Guy Azov, Tomer Koren, Roi Livni
We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer, Seldin, and Cesa-Bianchi (2021).
no code implementations • NeurIPS 2023 • Roi Livni
In the context of stochastic convex optimization, we examine how the generalization of an algorithm relates to the mutual information between the output model and the empirical sample.
no code implementations • 24 May 2023 • Niva Elkin-Koren, Uri Hacohen, Roi Livni, Shay Moran
In this work, we examine whether such algorithmic stability techniques are suitable to ensure the responsible use of generative models without inadvertently violating copyright laws.
no code implementations • 9 Nov 2023 • Daniel Carmon, Roi Livni, Amir Yehudayoff
In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon}+\frac{1}{\epsilon^2})$ data points are also sufficient.
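Comparing the two terms (a back-of-the-envelope reading, ours):

$$\frac{d}{\epsilon} \;\ge\; \frac{1}{\epsilon^{2}} \iff \epsilon \ge \frac{1}{d},$$

so for accuracies coarser than $1/d$ the dimension-dependent term dominates, and only in the very-high-accuracy regime $\epsilon < 1/d$ does the $1/\epsilon^{2}$ term take over.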
no code implementations • 14 Feb 2024 • Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy
In this work, we investigate the interplay between memorization and learning in the context of \emph{stochastic convex optimization} (SCO).
no code implementations • 26 Mar 2024 • Uri Hacohen, Adi Haviv, Shahar Sarfaty, Bruria Friedman, Niva Elkin-Koren, Roi Livni, Amit H Bermano
The advent of Generative Artificial Intelligence (GenAI) models, including GitHub Copilot, OpenAI GPT, and Stable Diffusion, has revolutionized content creation, enabling non-professionals to produce high-quality content across various domains.
no code implementations • 7 Apr 2024 • Roi Livni
We analyze the sample complexity of full-batch Gradient Descent (GD) in the setup of non-smooth Stochastic Convex Optimization.