no code implementations • 24 Oct 2024 • Itamar Harel, William M. Hoza, Gal Vardi, Itay Evron, Nathan Srebro, Daniel Soudry
We study the overfitting behavior of fully connected deep neural networks (NNs) with binary weights fitted to perfectly classify a noisy training set.
no code implementations • 23 Jan 2024 • Daniel Goldfarb, Itay Evron, Nir Weinberger, Daniel Soudry, Paul Hand
Previous works have analyzed separately how forgetting is affected by either task similarity or overparameterization.
no code implementations • 6 Jun 2023 • Itay Evron, Edward Moroshko, Gon Buzaglo, Maroun Khriesh, Badea Marjieh, Nathan Srebro, Daniel Soudry
We analyze continual learning on a sequence of separable linear classification tasks with binary labels.
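The setting above — a linear model trained sequentially on separable binary tasks — can be illustrated with a small toy sketch (not the paper's analysis; the perceptron rule and the two hypothetical tasks are stand-ins of my own):

```python
import numpy as np

def perceptron(w, X, y, epochs=20):
    """Perceptron updates until the current task is fit; a stand-in for
    per-task training, not the method analyzed in the paper."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w = w + yi * xi
    return w

# Two hypothetical 2D tasks, each linearly separable with binary labels.
X1 = np.array([[2.0, 0.1], [1.5, -0.2], [-2.0, 0.0], [-1.8, 0.3]])
y1 = np.array([1, 1, -1, -1])          # task 1: separated along the first axis
X2 = np.array([[0.1, 2.0], [-0.2, 1.5], [0.0, -2.0], [0.3, -1.8]])
y2 = np.array([1, 1, -1, -1])          # task 2: separated along the second axis

w = np.zeros(2)
w = perceptron(w, X1, y1)              # fit task 1
acc1_before = np.mean(np.sign(X1 @ w) == y1)
w = perceptron(w, X2, y2)              # then fit task 2
acc1_after = np.mean(np.sign(X1 @ w) == y1)
# "Forgetting" of task 1 = accuracy it loses after training on task 2.
forgetting = acc1_before - acc1_after
```

In this particular toy pair the tasks are nearly orthogonal, so task 1 remains correctly classified after task 2; forgetting depends on how the tasks relate, which is the kind of dependence the paper studies.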
1 code implementation • 10 Feb 2023 • Itay Evron, Ophir Onn, Tamar Weiss Orzech, Hai Azeroual, Daniel Soudry
Error-correcting codes (ECC) are used to reduce multiclass classification tasks to multiple binary classification subproblems.
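The reduction can be sketched in a few lines: assign each class a binary codeword, train one binary classifier per codeword column, and decode predictions to the nearest codeword. The 4-class code matrix below is a hypothetical example of mine, not one from the paper:

```python
import numpy as np

# Each row is a class codeword; each column defines one binary subproblem.
CODE = np.array([
    [0, 0, 0],   # class 0
    [0, 1, 1],   # class 1
    [1, 0, 1],   # class 2
    [1, 1, 0],   # class 3
])

def decode(bits):
    """Map a vector of binary predictions to the class whose codeword is
    nearest in Hamming distance (ties broken toward the lower class index)."""
    dists = np.abs(CODE - np.asarray(bits)).sum(axis=1)
    return int(np.argmin(dists))
```

With a larger minimum distance between codewords, a minority of erroneous binary predictions can still decode to the correct class, which is the error-correcting property the abstract refers to.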
no code implementations • 19 May 2022 • Itay Evron, Edward Moroshko, Rachel Ward, Nati Srebro, Daniel Soudry
In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in those areas.
2 code implementations • 22 Apr 2019 • Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson
While mainstream deep learning methods train the neural network's weights while keeping the network architecture fixed, the emerging neural architecture search (NAS) techniques make the architecture itself also amenable to training.
no code implementations • 13 Feb 2019 • Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro
We consider the question of what functions can be captured by ReLU networks with an unbounded number of units (infinite width), but where the overall network Euclidean norm (sum of squares of all weights in the system, except for an unregularized bias term for each unit) is bounded; or equivalently, what is the minimal norm required to approximate a given function.
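For a one-hidden-layer ReLU network, the minimal-norm problem described above can be formalized as follows (the notation is mine, chosen to match the sentence rather than taken from the paper):

```latex
\min_{\{w_i,\, b_i,\, v_i\},\, c} \;\; \frac{1}{2}\sum_{i}\left(\|w_i\|^2 + v_i^2\right)
\qquad \text{s.t.} \qquad
f(x) = \sum_{i} v_i\,\big[\langle w_i, x\rangle + b_i\big]_+ + c \quad \forall x,
```

where $[\cdot]_+$ is the ReLU, and the per-unit biases $b_i$ and output offset $c$ are left out of the norm, matching the "unregularized bias term" in the abstract.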
1 code implementation • NeurIPS 2018 • Itay Evron, Edward Moroshko, Koby Crammer
We build on a recent extreme classification framework with logarithmic time and space, and on a general approach for error correcting output coding (ECOC) with loss-based decoding, and introduce a flexible and efficient approach accompanied by theoretical bounds.
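Loss-based decoding differs from Hamming decoding in that the binary learners output real-valued margins and each class is scored by the total loss its codeword incurs on those margins. A minimal sketch, assuming a $\{-1,+1\}$ code matrix and an exponential loss of my choosing (the specific matrix and loss are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical 4-class code matrix over {-1, +1}; columns = binary learners.
CODE = np.array([
    [-1, -1, -1],
    [-1, +1, +1],
    [+1, -1, +1],
    [+1, +1, -1],
])

def loss_decode(margins, loss=lambda z: np.exp(-z)):
    """Score each class by summing a margin loss over columns; the signed
    agreement z = codeword_bit * margin is large when learner and codeword
    agree confidently. The lowest-loss class wins."""
    scores = loss(CODE * np.asarray(margins)).sum(axis=1)
    return int(np.argmin(scores))
```

Unlike hard Hamming decoding, confident margins outweigh uncertain ones, so e.g. `loss_decode([2.0, -0.5, 1.5])` is dominated by the two confident learners.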