Search Results for author: Itay Golan

Found 6 papers, 5 papers with code

Why Cold Posteriors? On the Suboptimal Generalization of Optimal Bayes Estimates

no code implementations • AABI Symposium 2021 • Chen Zeno, Itay Golan, Ari Pakman, Daniel Soudry

Recent works have shown that the predictive accuracy of Bayesian deep learning models exhibits substantial improvements when the posterior is raised to the power 1/T with T < 1.
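As a purely illustrative sketch of that tempering operation (a toy example, not the paper's experiments): for a Gaussian posterior N(mu, sigma^2), raising the density to the power 1/T and renormalizing gives N(mu, T * sigma^2), so T < 1 ("cold") concentrates the posterior.

def temper_gaussian(mu, sigma2, T):
    """Parameters of the renormalized tempered Gaussian posterior."""
    # Raising N(mu, sigma2) to the power 1/T and renormalizing scales the variance by T.
    return mu, T * sigma2

mu, sigma2 = 0.3, 0.5            # hypothetical posterior over a single weight
for T in (1.0, 0.5, 0.1):        # T = 1 is the Bayes posterior, T < 1 is "cold"
    print(T, temper_gaussian(mu, sigma2, T))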

Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates

1 code implementation • 1 Oct 2020 • Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry

The optimal Bayesian solution for this requires an intractable online Bayes update to the weight posterior (see the toy sketch below).

Continual Learning
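The online Bayes recursion referred to in the snippet above can be sketched in a toy conjugate setting where it is tractable in closed form (an assumed scalar Gaussian model, not the paper's variational fixed-point algorithm; for deep-network weights this update is intractable, which is what motivates the paper):

import numpy as np

def online_gaussian_update(prior_mu, prior_var, x, noise_var=1.0):
    """Closed-form posterior over the mean w after observing x ~ N(w, noise_var)."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mu = post_var * (prior_mu / prior_var + x / noise_var)
    return post_mu, post_var

mu, var = 0.0, 10.0                                # broad prior over the "weight"
for x in np.random.default_rng(0).normal(2.0, 1.0, size=5):
    # Online Bayes: p(w | D_1..t) is proportional to p(D_t | w) p(w | D_1..t-1)
    mu, var = online_gaussian_update(mu, var, x)
print(mu, var)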

Kernel and Rich Regimes in Overparametrized Models

1 code implementation • 20 Feb 2020 • Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro

We provide a complete and detailed analysis for a family of simple depth-$D$ models that already exhibit an interesting and meaningful transition between the kernel and rich regimes, and we also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
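A hedged numerical illustration of such a transition (an assumed toy setup with hypothetical hyperparameters, not the paper's exact models): gradient descent on a depth-2 "diagonal" reparametrization w = u^2 - v^2 of sparse linear regression tends toward a dense, kernel-like solution at large initialization scale and a sparse, rich-regime solution at small scale.

import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 40
X = rng.normal(size=(n, d))
w_star = np.zeros(d); w_star[:3] = 1.0           # sparse ground-truth predictor
y = X @ w_star

def train(alpha, lr=2e-3, steps=100_000):
    # Depth-2 reparametrization w = u**2 - v**2, initialized at scale alpha.
    u = np.full(d, alpha); v = np.full(d, alpha)
    for _ in range(steps):
        g = X.T @ (X @ (u**2 - v**2) - y) / n    # dL/dw for squared loss
        u -= lr * 2 * u * g                      # chain rule through w = u^2 - v^2
        v += lr * 2 * v * g
    return u**2 - v**2

for alpha in (2.0, 0.01):                        # large vs small initialization scale
    w = train(alpha)
    print(f"alpha={alpha}: l1 norm={np.abs(w).sum():.2f}, "
          f"large coords={int((np.abs(w) > 0.1).sum())}")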

Kernel and Rich Regimes in Overparametrized Models

1 code implementation • 13 Jun 2019 • Blake Woodworth, Suriya Gunasekar, Pedro Savarese, Edward Moroshko, Itay Golan, Jason Lee, Daniel Soudry, Nathan Srebro

A recent line of work studies overparametrized neural networks in the "kernel regime," i.e., when the network behaves during training as a kernelized linear predictor, so that training with gradient descent has the effect of finding the minimum RKHS norm solution.
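A minimal sketch of that kernel-regime intuition in the simplest linear case (an assumed toy setting, not code from the paper): gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-l2-norm interpolator, the finite-dimensional analogue of the minimum RKHS norm solution.

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                              # more parameters than examples
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)                             # zero initialization
for _ in range(50_000):
    w -= 0.01 * X.T @ (X @ w - y) / n       # plain gradient descent on squared loss

w_min_norm = np.linalg.pinv(X) @ y          # explicit minimum-l2-norm interpolator
print(np.linalg.norm(w - w_min_norm))       # close to 0: GD recovers the same solution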

Task Agnostic Continual Learning Using Online Variational Bayes

2 code implementations • 27 Mar 2018 • Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry

However, research on scenarios in which task boundaries are unknown during training has been lacking.

Continual Learning

Norm matters: efficient and accurate normalization schemes in deep networks

4 code implementations • NeurIPS 2018 • Elad Hoffer, Ron Banner, Itay Golan, Daniel Soudry

Over the past few years, Batch-Normalization has been commonly used in deep networks, allowing faster training and high performance for a wide variety of applications.
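For reference, a minimal sketch of the standard batch-normalization operation the snippet refers to (the paper analyses and proposes alternative normalization schemes, which are not reproduced here):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standard BN: normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(size=(32, 64))        # batch of 32, 64 features
out = batch_norm(x, gamma=np.ones(64), beta=np.zeros(64))
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])          # roughly 0 and 1 per feature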
