Search Results for author: Itay Evron

Found 8 papers, 3 papers with code

Provable Tempered Overfitting of Minimal Nets and Typical Nets

no code implementations · 24 Oct 2024 · Itamar Harel, William M. Hoza, Gal Vardi, Itay Evron, Nathan Srebro, Daniel Soudry

We study the overfitting behavior of fully connected deep neural networks (NNs) with binary weights fitted to perfectly classify a noisy training set.

The Role of Codeword-to-Class Assignments in Error-Correcting Codes: An Empirical Study

1 code implementation · 10 Feb 2023 · Itay Evron, Ophir Onn, Tamar Weiss Orzech, Hai Azeroual, Daniel Soudry

Error-correcting codes (ECC) are used to reduce multiclass classification tasks to multiple binary classification subproblems.

Binary Classification · Classification
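As an illustrative sketch of the ECOC reduction described above (not the paper's implementation), each class is assigned a binary codeword, one binary classifier is trained per codeword column, and a test point is decoded to the class whose codeword is nearest in Hamming distance to the predicted bits. The codeword matrix below is a hypothetical example; the paper's question is precisely how the codeword-to-class assignment affects accuracy.

```python
import numpy as np

# Hypothetical codeword matrix: rows = 4 classes, columns = 3 binary subproblems.
# Which class gets which row is the assignment studied in the paper.
codewords = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def ecoc_decode(bit_predictions, codewords):
    """Return the class whose codeword has minimal Hamming distance
    to the vector of predicted bits from the binary classifiers."""
    dists = np.abs(codewords - bit_predictions).sum(axis=1)
    return int(np.argmin(dists))

# The bits [1, 0, 1] exactly match class 2's codeword.
print(ecoc_decode(np.array([1, 0, 1]), codewords))  # → 2
```

Because distinct codewords here differ in at least two bits, a single binary classifier's error does not necessarily flip the decoded class, which is the error-correcting property the reduction relies on.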

How catastrophic can catastrophic forgetting be in linear regression?

no code implementations · 19 May 2022 · Itay Evron, Edward Moroshko, Rachel Ward, Nati Srebro, Daniel Soudry

In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in those areas.

Continual Learning · Regression

Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks

2 code implementations · 22 Apr 2019 · Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson

While mainstream deep learning methods train the neural network's weights while keeping the architecture fixed, emerging neural architecture search (NAS) techniques make the architecture itself amenable to training as well.

Network Pruning · Neural Architecture Search · +1

How do infinite width bounded norm networks look in function space?

no code implementations · 13 Feb 2019 · Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro

We consider the question of what functions can be captured by ReLU networks with an unbounded number of units (infinite width), but where the overall network Euclidean norm (sum of squares of all weights in the system, except for an unregularized bias term for each unit) is bounded; or equivalently what is the minimal norm required to approximate a given function.
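The minimal-norm quantity described in the abstract can be written schematically as follows (a sketch in the abstract's own terms; the symbols $R(f)$ and $g_\theta$ are hypothetical names, not the paper's notation):

```latex
R(f) \;=\; \min_{\theta}\ \tfrac{1}{2} \sum_{w \in \theta} w^{2}
\quad \text{subject to } g_{\theta} = f,
```

where $g_\theta$ ranges over ReLU networks of arbitrary (unbounded) width, and the per-unit unregularized bias terms are excluded from the sum. The paper asks which functions $f$ have finite $R(f)$, or equivalently what norm is needed to approximate a given $f$.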

Efficient Loss-Based Decoding on Graphs For Extreme Classification

1 code implementation · NeurIPS 2018 · Itay Evron, Edward Moroshko, Koby Crammer

We build on a recent extreme classification framework with logarithmic time and space, and on a general approach for error correcting output coding (ECOC) with loss-based decoding, and introduce a flexible and efficient approach accompanied by theoretical bounds.

Classification · General Classification
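Loss-based decoding, mentioned in the abstract above, replaces Hamming-distance decoding with real-valued margins: each class is scored by the total loss its codeword incurs against the binary predictors' margins, and the lowest-loss class wins. The sketch below is illustrative only (the codeword matrix and the exponential loss are hypothetical choices, not the paper's specific construction):

```python
import numpy as np

# Hypothetical {-1, +1} codeword matrix: rows = 4 classes, columns = 3 binary predictors.
codewords = np.array([
    [-1, -1, -1],
    [-1,  1,  1],
    [ 1, -1,  1],
    [ 1,  1, -1],
], dtype=float)

def loss_based_decode(margins, codewords, loss=lambda z: np.exp(-z)):
    """Return the class minimizing the summed loss of its codeword bits
    against the real-valued margins of the binary predictors
    (exponential loss used here as one common choice)."""
    scores = loss(margins[None, :] * codewords).sum(axis=1)
    return int(np.argmin(scores))

# Margins: confident +, weak -, moderate + -- best explained by class 2's codeword.
margins = np.array([2.0, -0.5, 1.5])
print(loss_based_decode(margins, codewords))  # → 2
```

Unlike Hamming decoding, this uses how confident each binary predictor is, so a weakly wrong bit costs less than a confidently wrong one.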
