1 code implementation • NeurIPS 2021 • Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
To accelerate learning with the NTK, we design a near input-sparsity time approximation algorithm for the NTK by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of the NTK (CNTK) can transform any image in time linear in the number of pixels.
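To give a flavor of the polynomial-sketching primitive this line refers to, here is a minimal TensorSketch (Pham & Pagh) for the degree-2 polynomial kernel. This is only an illustrative sketch of the generic technique, not the paper's NTK/CNTK construction; the sketch dimension `m` and all variable names are chosen for the example.

```python
import numpy as np

def countsketch_params(d, m, rng):
    """Random hash h: [d] -> [m] and signs s: [d] -> {-1, +1}."""
    h = rng.integers(0, m, size=d)
    s = rng.choice([-1.0, 1.0], size=d)
    return h, s

def countsketch(x, h, s, m):
    """Apply CountSketch: bucket the signed coordinates of x into m bins."""
    c = np.zeros(m)
    np.add.at(c, h, s * x)
    return c

def tensorsketch2(x, params, m):
    """Degree-2 TensorSketch: FFT-domain product of two CountSketches
    approximates a sketch of the tensor product x (x) x."""
    (h1, s1), (h2, s2) = params
    c1 = np.fft.rfft(countsketch(x, h1, s1, m))
    c2 = np.fft.rfft(countsketch(x, h2, s2, m))
    return np.fft.irfft(c1 * c2, n=m)

rng = np.random.default_rng(0)
d, m = 64, 4096
params = [countsketch_params(d, m, rng) for _ in range(2)]
x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = (x @ y) ** 2                        # degree-2 polynomial kernel
approx = tensorsketch2(x, params, m) @ tensorsketch2(y, params, m)
print(exact, approx)                        # approx concentrates around exact as m grows
```

The cost of sketching a vector is near linear in its number of nonzeros, which is what makes this primitive attractive for input-sparsity-time kernel approximation.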
no code implementations • 26 Apr 2021 • Neta Shoham, Tomer Avidor, Nadav Israel
In this work we aim to close this gap by showing that losses that incorporate an output regularization term become symmetric as the regularization coefficient goes to infinity.
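Assuming the standard label-noise notion of symmetry (the loss summed over all labels is a constant independent of the prediction), the quick numerical check below shows MAE satisfying the condition while cross-entropy does not. This illustrates the symmetry property itself, not the paper's regularization construction.

```python
import numpy as np

def cross_entropy(p, y):
    return -np.log(p[y])

def mae(p, y):
    """Absolute error against the one-hot label; equals 2 * (1 - p[y])."""
    e = np.zeros_like(p)
    e[y] = 1.0
    return np.abs(p - e).sum()

rng = np.random.default_rng(1)
K = 5
for _ in range(3):
    p = rng.dirichlet(np.ones(K))           # a random softmax output
    # Symmetry condition: sum_y L(p, y) should not depend on p.
    print(sum(mae(p, y) for y in range(K)),             # always 2*(K-1)
          sum(cross_entropy(p, y) for y in range(K)))   # varies with p
```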
no code implementations • 3 Apr 2021 • Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.
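As a minimal sketch of the random-features ingredient mentioned here: for Cho & Saul's degree-1 arc-cosine kernel, random ReLU features with Gaussian projections give an unbiased Monte Carlo approximation. This shows the generic construction under that assumption, not the paper's combined algorithm.

```python
import numpy as np

def arccos1_exact(x, y):
    """Cho & Saul's degree-1 arc-cosine kernel (closed form)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

def arccos1_features(X, W):
    """Random ReLU features: phi(x) = sqrt(2/m) * relu(W x), W Gaussian,
    so that phi(x) . phi(y) estimates the degree-1 arc-cosine kernel."""
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.maximum(W @ X.T, 0.0).T

rng = np.random.default_rng(0)
d, m = 32, 20000
W = rng.standard_normal((m, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)
phi = arccos1_features(np.stack([x, y]), W)
print(arccos1_exact(x, y), phi[0] @ phi[1])  # close for large m
```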
no code implementations • 27 Sep 2020 • Neta Shoham, Haim Avron
Unfortunately, classical theory on optimal experimental design focuses on selecting examples in order to learn underparameterized (and thus non-interpolative) models, while modern machine learning models such as deep neural networks are overparameterized, and are often trained to interpolate the training data.
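For a concrete picture of "interpolative", a minimal example: a linear model with more features than data points, fit by minimum-norm least squares, drives the training error to zero. The dimensions are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200                        # overparameterized: d >> n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum-norm least-squares solution w = X^+ y interpolates the data.
w = np.linalg.pinv(X) @ y
print(np.max(np.abs(X @ w - y)))      # ~0: training error vanishes
```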
no code implementations • 17 Oct 2019 • Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef, Itai Zeitak
Building on an analogy with Lifelong Learning, we adapt a solution for catastrophic forgetting to Federated Learning.
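The solution adapted here is in the spirit of Elastic Weight Consolidation: a Fisher-weighted quadratic penalty keeps a client's parameters close to values important elsewhere. The sketch below illustrates that idea only; the names `anchors`, `fisher`, and `lam`, and the toy local loss, are illustrative assumptions, not the paper's algorithm or notation.

```python
import numpy as np

def penalized_loss(theta, local_loss, anchors, lam):
    """Local objective plus an EWC-style quadratic penalty pulling theta
    toward anchor parameters, weighted by their (diagonal) Fisher terms.
    anchors: list of (theta_star, fisher) pairs with fisher >= 0 per parameter."""
    penalty = sum(np.sum(f * (theta - t) ** 2) for t, f in anchors)
    return local_loss(theta) + 0.5 * lam * penalty

# Toy usage with hypothetical values: one anchor from a previous round.
rng = np.random.default_rng(0)
theta = rng.standard_normal(4)
anchor = (rng.standard_normal(4), np.abs(rng.standard_normal(4)))
local_loss = lambda th: np.sum((th - 1.0) ** 2)   # stand-in local risk
print(penalized_loss(theta, local_loss, [anchor], lam=0.1))
```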