Search Results for author: Ido Nachum

Found 10 papers, 0 papers with code

Fantastic Generalization Measures are Nowhere to be Found

no code implementations24 Sep 2023 Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger

We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions.

Generalization Bounds
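
As a reading aid for the sentence above, here is one plausible formalization of "uniformly tight" in our own notation (a hedged sketch; the bound $B$, loss $\ell$, and tolerance $\varepsilon$ are our symbols, not necessarily the paper's exact definition):

```latex
% One plausible formalization of "uniformly tight" as described above
% (our notation, not necessarily the paper's exact definition):
% a bound B is uniformly tight up to \varepsilon if, for every learning
% algorithm A and every population distribution D,
\[
  \sup_{A}\;\sup_{D}\;
  \mathbb{E}_{S \sim D^{n}}
  \Bigl|\, B(A, S) \;-\; L_{D}\bigl(A(S)\bigr) \,\Bigr| \;\le\; \varepsilon,
\]
\[
  \text{where } L_{D}(h) \;=\; \mathbb{E}_{(x,y)\sim D}\,\ell\bigl(h(x), y\bigr)
  \text{ is the population loss.}
\]
```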

Finite Littlestone Dimension Implies Finite Information Complexity

no code implementations27 Jun 2022 Aditya Pradeep, Ido Nachum, Michael Gastpar

We prove that every online learnable class of functions of Littlestone dimension $d$ admits a learning algorithm with finite information complexity.

A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs

no code implementations3 Nov 2021 Ido Nachum, Jan Hązła, Michael Gastpar, Anatoly Khina

How does a randomly initialized layer transform the geometry of its input? The celebrated Johnson--Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved.

LEMMA
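
For readers unfamiliar with the Johnson--Lindenstrauss phenomenon invoked above, the following is a minimal, self-contained NumPy sketch (ours, not the authors' code) of the classical statement for a random linear layer: pairwise distances of a point cloud are approximately preserved after a random Gaussian projection. All sizes and the seed are arbitrary illustrative choices.

```python
import numpy as np

# Minimal illustration of the Johnson--Lindenstrauss phenomenon for a random
# linear map (a single randomly initialized fully-connected layer, no bias,
# no nonlinearity). Illustrative sketch only, not code from the paper.
rng = np.random.default_rng(0)

n_points, d_in, d_out = 50, 1000, 300   # arbitrary illustrative sizes
X = rng.normal(size=(n_points, d_in))   # a generic point cloud

# Random Gaussian projection, scaled so squared norms are preserved in expectation.
W = rng.normal(size=(d_in, d_out)) / np.sqrt(d_out)
Y = X @ W

def pairwise_sq_dists(Z):
    """Squared Euclidean distances between all pairs of rows of Z."""
    sq = np.sum(Z**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * Z @ Z.T

orig = pairwise_sq_dists(X)
proj = pairwise_sq_dists(Y)

# Ratio of projected to original squared distances, over distinct pairs.
mask = ~np.eye(n_points, dtype=bool)
ratios = proj[mask] / orig[mask]
print(f"distance ratios: min={ratios.min():.3f}, max={ratios.max():.3f}")
# With these sizes the ratios concentrate around 1, i.e. the geometry of the
# point cloud is essentially preserved by the random layer.
```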

Regularization by Misclassification in ReLU Neural Networks

no code implementations3 Nov 2021 Elisabetta Cornacchia, Jan Hązła, Ido Nachum, Amir Yehudayoff

We study the implicit bias of ReLU neural networks trained by a variant of SGD where at each step, the label is changed with probability $p$ to a random label (label smoothing being a close variant of this procedure).
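
The training variant described above can be sketched as follows: at each SGD step, with probability $p$ the true label is replaced by a random label before the gradient step. Below is an illustrative NumPy sketch with a placeholder one-hidden-layer ReLU network and squared loss; it is our own sketch of the mechanism, not the authors' code or experimental setup.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of SGD where, at each step, the
# training label is replaced with a random label with probability p.
# Model: a one-hidden-layer ReLU network with scalar output and squared loss.
rng = np.random.default_rng(0)
p = 0.1                                  # label-flip probability (placeholder)
n, d, width, lr, steps = 200, 20, 64, 0.05, 2000

# Synthetic data with +/-1 labels (placeholder task).
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])

# Randomly initialized parameters.
W1 = rng.normal(size=(d, width)) / np.sqrt(d)
w2 = rng.normal(size=width) / np.sqrt(width)

for t in range(steps):
    i = rng.integers(n)
    x, label = X[i], y[i]
    if rng.random() < p:                 # with probability p, use a random label
        label = rng.choice([-1.0, 1.0])

    # Forward pass.
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    out = h @ w2
    # Backward pass for squared loss 0.5 * (out - label)^2.
    g_out = out - label
    g_w2 = g_out * h
    g_h = g_out * w2
    g_pre = g_h * (h > 0)                # ReLU derivative
    g_W1 = np.outer(x, g_pre)

    # SGD update.
    W1 -= lr * g_W1
    w2 -= lr * g_w2
```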

A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs

no code implementations ICLR 2022 Ido Nachum, Jan Hązła, Michael Gastpar, Anatoly Khina

How does a randomly initialized layer transform the geometry of its input? The celebrated Johnson-Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved.

LEMMA

On Symmetry and Initialization for Neural Networks

no code implementations1 Jul 2019 Ido Nachum, Amir Yehudayoff

This work provides an additional step in the theoretical understanding of neural networks.

Average-Case Information Complexity of Learning

no code implementations25 Nov 2018 Ido Nachum, Amir Yehudayoff

Can it be that all concepts in the class require leaking a large amount of information?

On the Perceptron's Compression

no code implementations14 Jun 2018 Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff

We study, and provide an exposition of, several phenomena related to the perceptron's compression.

A Direct Sum Result for the Information Complexity of Learning

no code implementations16 Apr 2018 Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We introduce a class of functions of VC dimension $d$ over the domain $\mathcal{X}$ with information complexity at least $\Omega\left(d\log \log \frac{|\mathcal{X}|}{d}\right)$ bits for any consistent and proper algorithm (deterministic or randomized).
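
For context on the quantity being bounded: in this line of work (see also "Learners that Use Little Information" below), the information complexity of a (possibly randomized) learner is the mutual information between the input sample and the output hypothesis. A sketch of the definition in our notation:

```latex
% Information complexity of a learning algorithm A on an n-example sample S
% (standard definition in this line of work, written in our notation):
\[
  \mathrm{IC}(A) \;=\; I\bigl(S \,;\, A(S)\bigr),
  \qquad S = \bigl((x_1, y_1), \dots, (x_n, y_n)\bigr) .
\]
```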

Learners that Use Little Information

no code implementations14 Oct 2017 Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We discuss an approach for proving upper bounds on the amount of information that algorithms reveal about their inputs. We also provide a lower bound by exhibiting a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a large amount of information.
