Search Results for author: Amir Yehudayoff

Found 23 papers, 1 paper with code

The Sample Complexity Of ERMs In Stochastic Convex Optimization

no code implementations • 9 Nov 2023 • Daniel Carmon, Roi Livni, Amir Yehudayoff

In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon}+\frac{1}{\epsilon^2})$ data points are also sufficient.

Local Borsuk-Ulam, Stability, and Replicability

no code implementations • 2 Nov 2023 • Zachary Chase, Bogdan Chornomaz, Shay Moran, Amir Yehudayoff

To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings.

A Unified Characterization of Private Learnability via Graph Theory

no code implementations • 8 Apr 2023 • Noga Alon, Shay Moran, Hilla Schefler, Amir Yehudayoff

Learning $\mathcal{H}$ under pure DP is captured by the fractional clique number of $G$.
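As background (this is the standard linear-programming definition, not a statement from the paper), the fractional clique number of a graph $G=(V,E)$ is the largest total weight that can be placed on the vertices so that every independent set carries weight at most one:

$$\omega_f(G) \;=\; \max\Big\{\sum_{v\in V} x_v \;:\; \sum_{v\in I} x_v \le 1 \text{ for every independent set } I\subseteq V,\;\; x_v\ge 0\Big\}.$$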

Replicability and stability in learning

no code implementations • 7 Apr 2023 • Zachary Chase, Shay Moran, Amir Yehudayoff

Impagliazzo et al. showed how to boost any replicable algorithm so that it produces the same output with probability arbitrarily close to 1.

A Characterization of Multiclass Learnability

no code implementations • 3 Mar 2022 • Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff

This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014).

Tasks: Learning Theory, Open-Ended Question Answering, +1

Regularization by Misclassification in ReLU Neural Networks

no code implementations • 3 Nov 2021 • Elisabetta Cornacchia, Jan Hązła, Ido Nachum, Amir Yehudayoff

We study the implicit bias of ReLU neural networks trained by a variant of SGD where at each step, the label is changed with probability $p$ to a random label (label smoothing being a close variant of this procedure).
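A minimal NumPy sketch of the label-randomization step described above; the toy two-class data, the one-hidden-layer network, and the cross-entropy loss are illustrative assumptions, not the authors' experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in the plane (placeholder for the paper's setting).
n, d, k = 200, 2, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One-hidden-layer ReLU network.
h = 32
W1 = rng.normal(scale=0.5, size=(d, h))
W2 = rng.normal(scale=0.5, size=(h, k))

def forward(x):
    z = np.maximum(x @ W1, 0.0)   # ReLU activations
    return z, z @ W2              # hidden layer, logits

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

p, lr = 0.1, 0.05                 # label-randomization probability, step size

for step in range(5000):
    i = rng.integers(n)
    x, label = X[i], y[i]
    # The SGD variant studied in the paper: with probability p,
    # this step trains on a random label instead of the true one.
    if rng.random() < p:
        label = rng.integers(k)
    z, logits = forward(x)
    grad_logits = softmax(logits)
    grad_logits[label] -= 1.0     # gradient of cross-entropy at the (possibly random) label
    grad_W2 = np.outer(z, grad_logits)
    grad_z = W2 @ grad_logits
    grad_z[z <= 0] = 0.0          # ReLU derivative
    grad_W1 = np.outer(x, grad_z)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```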

Slicing the hypercube is not easy

no code implementations • 10 Feb 2021 • Gal Yehuda, Amir Yehudayoff

We prove that at least $\Omega(n^{0.51})$ hyperplanes are needed to slice all edges of the $n$-dimensional hypercube.
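For context, a small brute-force check of the condition in question, assuming the usual convention that a hyperplane $a\cdot x = b$ slices an edge when the edge's two endpoints lie strictly on opposite sides (the enumeration is only feasible for small $n$):

```python
import itertools

import numpy as np

def slices_all_edges(hyperplanes, n):
    """Return True if the affine hyperplanes (a, b), meaning a.x = b,
    together slice every edge of the n-dimensional hypercube {0,1}^n."""
    for u in itertools.product((0, 1), repeat=n):
        for i in range(n):
            if u[i] == 1:          # visit each edge once, from its lower endpoint
                continue
            u_arr = np.array(u)
            v = np.array(u)
            v[i] = 1
            if not any((a @ u_arr - b) * (a @ v - b) < 0 for a, b in hyperplanes):
                return False       # this edge is missed by every hyperplane
    return True

# Sanity check: the n parallel planes x_1 + ... + x_n = i + 1/2 slice every edge,
# because adjacent vertices differ by exactly one in their coordinate sum.
n = 3
planes = [(np.ones(n), i + 0.5) for i in range(n)]
print(slices_all_edges(planes, n))  # True
```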

On Symmetry and Initialization for Neural Networks

no code implementations • 1 Jul 2019 • Ido Nachum, Amir Yehudayoff

This work provides an additional step in the theoretical understanding of neural networks.

Learnability can be undecidable

no code implementations • Nature Machine Intelligence 2019 • Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff

We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.

Tasks: BIG-bench Machine Learning, PAC learning

Average-Case Information Complexity of Learning

no code implementations • 25 Nov 2018 • Ido Nachum, Amir Yehudayoff

Can it be that all concepts in the class require leaking a large amount of information?

On the Perceptron's Compression

no code implementations • 14 Jun 2018 • Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff

We study and provide exposition to several phenomena that are related to the perceptron's compression.

On the Covariance-Hessian Relation in Evolution Strategies

1 code implementation • 10 Jun 2018 • Ofer M. Shir, Amir Yehudayoff

We consider Evolution Strategies operating only with isotropic Gaussian mutations on positive quadratic objective functions, and investigate the covariance matrix when constructed out of selected individuals by truncation.
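A minimal sketch of that construction (not the paper's released implementation): one generation of a $(\mu,\lambda)$ evolution strategy with isotropic Gaussian mutations on a positive definite quadratic objective, followed by the empirical covariance of the truncation-selected mutation vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Positive definite quadratic objective f(x) = x^T H x.
n = 5
A = rng.normal(size=(n, n))
H = A @ A.T + n * np.eye(n)

def f(x):
    return x @ H @ x

parent = rng.normal(size=n)
sigma, lam, mu = 0.1, 200, 20     # mutation strength, offspring count, survivors

# One generation: isotropic Gaussian mutations around the parent.
mutations = sigma * rng.normal(size=(lam, n))
offspring = parent + mutations

# Truncation selection: keep the mu offspring with the smallest objective values.
order = np.argsort([f(x) for x in offspring])
selected = mutations[order[:mu]]

# Empirical covariance of the selected mutation vectors; the papers above study
# how this matrix relates to the Hessian H of the objective.
C = np.cov(selected, rowvar=False)
```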

Tasks: Relation

A Direct Sum Result for the Information Complexity of Learning

no code implementations • 16 Apr 2018 • Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We introduce a class of functions of VC dimension $d$ over the domain $\mathcal{X}$ with information complexity at least $\Omega\left(d\log \log \frac{|\mathcal{X}|}{d}\right)$ bits for any consistent and proper algorithm (deterministic or random).

On Communication Complexity of Classification Problems

no code implementations • 16 Nov 2017 • Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To naturally fit into the framework of learning theory, the players can send each other examples (as well as bits) where each example/bit costs one unit of communication.

Tasks: BIG-bench Machine Learning, Classification, +2

A learning problem that is independent of the set theory ZFC axioms

no code implementations • 14 Nov 2017 • Shai Ben-David, Pavel Hrubes, Shay Moran, Amir Shpilka, Amir Yehudayoff

We consider the following statistical estimation problem: given a family F of real valued functions over some domain X and an i.i.d.
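For orientation, this estimation problem (the 'estimating the maximum' problem mentioned in the Nature Machine Intelligence entry above) asks, roughly, the following (a paraphrase of the setup, not a quotation of the abstract): given an i.i.d. sample from an unknown distribution $P$ over $X$, output $h \in F$ such that, with probability at least $1-\delta$,

$$\mathbb{E}_{P}[h] \;\ge\; \sup_{f\in F} \mathbb{E}_{P}[f] \;-\; \varepsilon.$$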

Tasks: General Classification, PAC learning

Learners that Use Little Information

no code implementations • 14 Oct 2017 • Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information.

Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues

no code implementations • NeurIPS 2017 • Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, Amir Yehudayoff

In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution.
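For reference, the classic theorem being varied states that for an i.i.d. real-valued sample $X_1,\dots,X_m \sim P$ with empirical CDF $F_m(x) = \frac{1}{m}\sum_{i=1}^m \mathbf{1}\{X_i \le x\}$ and population CDF $F$,

$$\sup_{x\in\mathbb{R}} \big|F_m(x) - F(x)\big| \;\xrightarrow[m\to\infty]{}\; 0 \quad \text{almost surely.}$$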

Supervised learning through the lens of compression

no code implementations • NeurIPS 2016 • Ofir David, Shay Moran, Amir Yehudayoff

This work continues the study of the relationship between sample compression schemes and statistical learning, which has been mostly investigated within the framework of binary classification.

Tasks: Binary Classification

On statistical learning via the lens of compression

no code implementations • 12 Oct 2016 • Ofir David, Shay Moran, Amir Yehudayoff

(iv) A dichotomy for sample compression in multiclass categorization problems: If a non-trivial compression exists then a compression of logarithmic size exists.

Tasks: Binary Classification, Learning Theory

On the Theoretical Capacity of Evolution Strategies to Statistically Learn the Landscape Hessian

no code implementations • 23 Jun 2016 • Ofer M. Shir, Jonathan Roslund, Amir Yehudayoff

We study the theoretical capacity to statistically learn local landscape information by Evolution Strategies (ESs).

Sample compression schemes for VC classes

no code implementations • 24 Mar 2015 • Shay Moran, Amir Yehudayoff

Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms.

Teaching and compressing for low VC-dimension

no code implementations • 22 Feb 2015 • Shay Moran, Amir Shpilka, Avi Wigderson, Amir Yehudayoff

We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits.
