no code implementations • 27 May 2024 • Zachary Chase, Bogdan Chornomaz, Steve Hanneke, Shay Moran, Amir Yehudayoff
In particular, we prove that for every $d$ there is a class with VC dimension $d$ that cannot be embedded in any extremal class of VC dimension smaller than exponential in $d$.
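As a quick aside (not from the paper): for small finite classes, VC dimension can be computed by brute force, which makes results like this easy to experiment with numerically. A minimal sketch, with all names illustrative:

```python
from itertools import combinations

def vc_dimension(functions, domain):
    """Brute-force VC dimension of a finite class of binary functions.

    `functions` is a set of tuples, each listing a function's values on
    `domain` in order. Returns the largest d such that some d-point
    subset of the domain is shattered (all 2^d labelings appear).
    """
    best = 0
    for d in range(1, len(domain) + 1):
        for subset in combinations(range(len(domain)), d):
            patterns = {tuple(f[i] for i in subset) for f in functions}
            if len(patterns) == 2 ** d:      # subset is shattered
                best = d
                break
        else:
            return best   # nothing of size d shatters, so nothing larger does
    return best

# Thresholds on a 4-point line have VC dimension 1.
domain = [0, 1, 2, 3]
thresholds = {tuple(int(x >= t) for x in domain) for t in range(5)}
print(vc_dimension(thresholds, domain))   # -> 1
```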
no code implementations • 9 Nov 2023 • Daniel Carmon, Roi Livni, Amir Yehudayoff
In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon}+\frac{1}{\epsilon^2})$ data points are also sufficient.
no code implementations • 2 Nov 2023 • Zachary Chase, Bogdan Chornomaz, Shay Moran, Amir Yehudayoff
To offer a broader view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings.
no code implementations • 8 Apr 2023 • Noga Alon, Shay Moran, Hilla Schefler, Amir Yehudayoff
Learning $\mathcal{H}$ under pure DP is captured by the fractional clique number of $G$.
no code implementations • 7 Apr 2023 • Zachary Chase, Shay Moran, Amir Yehudayoff
Impagliazzo et al. showed how to boost any replicable algorithm so that it produces the same output with probability arbitrarily close to 1.
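To make the boosting statement concrete, here is a hedged sketch (under simplifying assumptions, and not necessarily the construction of Impagliazzo et al.): if, for a fixed shared seed, a replicable algorithm returns one "canonical" output with probability at least $1-\rho > 1/2$ over the sample, then repetition plus a plurality vote amplifies that probability exponentially in the number of runs.

```python
import random
from collections import Counter

def amplify(replicable_alg, sample_draws, seed, k=9):
    # Run the algorithm on k independent samples with the SAME seed and
    # return the plurality output; by a Chernoff bound this equals the
    # canonical output with probability 1 - exp(-Omega(k)).
    outputs = [replicable_alg(next(sample_draws), seed) for _ in range(k)]
    return Counter(outputs).most_common(1)[0][0]

# Toy replicable algorithm: estimate a coin's bias and round it to a grid
# whose offset is derived from the shared seed (randomized rounding makes
# the output stable under resampling with high probability).
def round_to_grid(sample, seed):
    offset = 0.2 * random.Random(seed).random()
    p_hat = sum(sample) / len(sample)
    return offset + 0.2 * round((p_hat - offset) / 0.2)

def draws(p=0.62, n=500):
    while True:
        yield [int(random.random() < p) for _ in range(n)]

print(amplify(round_to_grid, draws(), seed=7))
```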
no code implementations • 3 Mar 2022 • Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff
This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014).
no code implementations • 3 Nov 2021 • Elisabetta Cornacchia, Jan Hązła, Ido Nachum, Amir Yehudayoff
We study the implicit bias of ReLU neural networks trained by a variant of SGD where at each step, the label is changed with probability $p$ to a random label (label smoothing being a close variant of this procedure).
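A minimal NumPy sketch of the training variant being described: with probability $p$, the label used in the gradient step is replaced by a uniformly random one. The tiny architecture and all parameter choices here are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(d_in=2, width=16):
    # One-hidden-layer ReLU network for binary classification.
    return {"W": rng.normal(size=(width, d_in)) / np.sqrt(d_in),
            "v": rng.normal(size=width) / np.sqrt(width)}

def sgd_step(theta, x, y, p, lr=0.1):
    # With probability p, replace the true label by a random one.
    y_used = int(rng.integers(0, 2)) if rng.random() < p else y
    t = 2.0 * y_used - 1.0                     # {0,1} -> {-1,+1}
    h = np.maximum(theta["W"] @ x, 0.0)        # ReLU activations
    z = float(theta["v"] @ h)
    dz = -t / (1.0 + np.exp(np.clip(t * z, -30.0, 30.0)))  # logistic loss
    grad_v = dz * h
    grad_W = dz * np.outer(theta["v"] * (h > 0.0), x)
    theta["v"] -= lr * grad_v
    theta["W"] -= lr * grad_W
    return theta

theta = init()
for _ in range(1000):
    x = rng.normal(size=2)
    theta = sgd_step(theta, x, y=int(x[0] + x[1] > 0), p=0.2)
```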
no code implementations • 10 Feb 2021 • Gal Yehuda, Amir Yehudayoff
We prove that at least $\Omega(n^{0.51})$ hyperplanes are needed to slice all edges of the $n$-dimensional hypercube.
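For concreteness (an illustrative check, not part of the paper's proof): a hyperplane $\{x : a \cdot x = b\}$ slices an edge of the cube when the edge's endpoints lie strictly on opposite sides, and on small cubes this can be verified directly.

```python
from itertools import product

def sliced_edges(a, b, n):
    # Count edges of {0,1}^n whose endpoints lie strictly on opposite
    # sides of the hyperplane a.x = b.
    count = 0
    for u in product((0, 1), repeat=n):
        su = sum(ai * ui for ai, ui in zip(a, u)) - b
        for i in range(n):
            if u[i] == 0:                      # count each edge once
                v = list(u)
                v[i] = 1
                sv = sum(ai * vi for ai, vi in zip(a, v)) - b
                if su * sv < 0:
                    count += 1
    return count

# Sanity check: the n axis-parallel hyperplanes x_i = 1/2 together slice
# all n * 2^(n-1) edges of the n-cube.
n = 4
total = sum(sliced_edges([int(j == i) for j in range(n)], 0.5, n)
            for i in range(n))
print(total, n * 2 ** (n - 1))                 # 32 32
```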
no code implementations • 9 Nov 2020 • Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, Amir Yehudayoff
How quickly can a given class of concepts be learned from examples?
no code implementations • 1 Jul 2019 • Ido Nachum, Amir Yehudayoff
This work provides an additional step in the theoretical understanding of neural networks.
no code implementations • Nature Machine Intelligence 2019 • Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff
We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.
no code implementations • 25 Nov 2018 • Ido Nachum, Amir Yehudayoff
Can it be that all concepts in the class require leaking a large amount of information?
no code implementations • 14 Jun 2018 • Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff
We study and provide an exposition of several phenomena related to the perceptron's compression.
1 code implementation • 10 Jun 2018 • Ofer M. Shir, Amir Yehudayoff
We consider Evolution Strategies operating only with isotropic Gaussian mutations on positive quadratic objective functions, and investigate the covariance matrix constructed from the individuals selected by truncation.
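A minimal sketch of that setting (all parameter values illustrative): sample isotropic Gaussian mutations around a parent on a quadratic objective, select by truncation, and inspect the empirical covariance of the survivors, which selection skews away from isotropy.

```python
import numpy as np

rng = np.random.default_rng(1)

def truncation_covariance(H, parent, sigma=0.3, lam=200, mu=20):
    # (mu, lambda)-ES step on f(x) = x^T H x with isotropic mutations:
    # keep the mu best offspring and return their empirical covariance.
    offspring = parent + sigma * rng.normal(size=(lam, parent.size))
    scores = np.einsum('ij,jk,ik->i', offspring, H, offspring)
    selected = offspring[np.argsort(scores)[:mu]]      # truncation selection
    return np.cov(selected, rowvar=False)

H = np.diag([1.0, 10.0])                 # ill-conditioned positive quadratic
C = truncation_covariance(H, parent=np.array([2.0, 2.0]))
print(np.round(C, 4))
```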
no code implementations • 16 Apr 2018 • Ido Nachum, Jonathan Shafer, Amir Yehudayoff
We introduce a class of functions of VC dimension $d$ over the domain $\mathcal{X}$ with information complexity at least $\Omega\left(d\log \log \frac{|\mathcal{X}|}{d}\right)$ bits for any consistent and proper algorithm (deterministic or randomized).
no code implementations • 16 Nov 2017 • Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff
To fit naturally into the framework of learning theory, the players can send each other examples (as well as bits), where each example or bit costs one unit of communication.
no code implementations • 14 Nov 2017 • Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff
We consider the following statistical estimation problem: given a family $F$ of real-valued functions over some domain $X$ and an i.i.d. sample.
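A toy sketch of the estimation problem just described (illustrative only; the paper's point is set-theoretic, about when such estimation is possible at all): maximize the empirical mean over the family and hope the empirical maximizer is near the true one.

```python
import random

def estimate_maximum(family, sample):
    # Return the function whose empirical mean on the i.i.d. sample is
    # largest, as a proxy for argmax_{f in F} E_P[f].
    return max(family, key=lambda f: sum(f(x) for x in sample) / len(sample))

# Toy instance: indicators of [0, a] under the uniform distribution on
# [0, 1], so E[f_a] = a and the widest interval should win.
family = [lambda x, a=a: float(x <= a) for a in (0.1, 0.4, 0.8)]
sample = [random.random() for _ in range(1000)]
best = estimate_maximum(family, sample)
print([best is f for f in family])       # typically [False, False, True]
```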
no code implementations • 14 Oct 2017 • Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff
We discuss an approach for proving upper bounds on the amount of information that algorithms reveal about their inputs, and provide a lower bound by exhibiting a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a large amount of information.
no code implementations • NeurIPS 2017 • Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, Amir Yehudayoff
In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution.
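As background for the classic statement (a sketch of the textbook version, not the paper's variant): the empirical CDF of an i.i.d. sample converges to the true CDF uniformly, at rate roughly $O(1/\sqrt{n})$, which is easy to observe numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_cdf_deviation(sample, true_cdf):
    # Sup-norm distance between the empirical CDF and `true_cdf`; the sup
    # is attained at sample points, where the empirical CDF jumps from
    # (i-1)/n to i/n.
    xs = np.sort(sample)
    n = len(xs)
    F = true_cdf(xs)
    upper = np.arange(1, n + 1) / n - F
    lower = F - np.arange(0, n) / n
    return max(upper.max(), lower.max())

# Uniform(0, 1) samples: the deviation shrinks as n grows.
for n in (100, 10_000):
    print(n, round(sup_cdf_deviation(rng.random(n), lambda x: x), 4))
```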
no code implementations • NeurIPS 2016 • Ofir David, Shay Moran, Amir Yehudayoff
This work continues the study of the relationship between sample compression schemes and statistical learning, which has been mostly investigated within the framework of binary classification.
no code implementations • 12 Oct 2016 • Ofir David, Shay Moran, Amir Yehudayoff
(iv) A dichotomy for sample compression in multiclass categorization problems: if a non-trivial compression exists, then a compression of logarithmic size exists.
no code implementations • 23 Jun 2016 • Ofer M. Shir, Jonathan Roslund, Amir Yehudayoff
We study the theoretical capacity to statistically learn local landscape information by Evolution Strategies (ESs).
no code implementations • 24 Mar 2015 • Shay Moran, Amir Yehudayoff
Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms.
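A classic toy instance of the definition (standard folklore, not a construction from the paper): thresholds on the real line admit a compression scheme of size one, since a single well-chosen labeled example determines a consistent hypothesis.

```python
def compress(sample):
    # Keep the smallest positively labeled example; if every label is 0,
    # keep the largest example instead.
    positives = [(x, y) for x, y in sample if y == 1]
    return min(positives) if positives else max(sample)

def reconstruct(kept):
    # Rebuild a threshold consistent with any realizable sample that
    # compresses to `kept`.
    x0, y0 = kept
    return (lambda x: int(x >= x0)) if y0 == 1 else (lambda x: int(x > x0))

sample = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)   # the sample is labeled exactly
```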
no code implementations • 22 Feb 2015 • Shay Moran, Amir Shpilka, Avi Wigderson, Amir Yehudayoff
We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits.