Search Results for author: Dar Gilboa

Found 9 papers, 5 papers with code

Deep Networks Provably Classify Data on Curves

no code implementations • 29 Jul 2021 • Tingran Wang, Sam Buchanan, Dar Gilboa, John Wright

Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems.

Marginalizable Density Models

1 code implementation • 8 Jun 2021 • Dar Gilboa, Ari Pakman, Thibault Vatter

Probability density models based on deep networks have achieved remarkable success in modeling complex high-dimensional datasets.

Density Estimation • Imputation

Deep Networks and the Multiple Manifold Problem

no code implementations • ICLR 2021 • Sam Buchanan, Dar Gilboa, John Wright

Our analysis demonstrates concrete benefits of depth and width in the context of a practically-motivated model problem: the depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and the width acts as a statistical resource, enabling concentration of the randomly-initialized network and its gradients.

Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization?

1 code implementation • ICML 2020 • Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Deep neural networks are typically initialized with random weights, with variances chosen to facilitate signal propagation and stable gradients.
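
As a rough illustration of that standard practice (not this paper's proposed scheme), here is a minimal sketch of variance-scaled independent Gaussian initialization with He-style fan-in scaling; the layer widths, batch size, and use of NumPy are assumptions for the example only:

```python
import numpy as np

def init_layer(fan_in, fan_out, rng):
    # Independent Gaussian weights with variance 2 / fan_in (He scaling),
    # chosen so signal magnitude is roughly preserved through ReLU layers.
    w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return w, b

rng = np.random.default_rng(0)
sizes = [784, 512, 512, 10]          # hypothetical layer widths
params = [init_layer(m, n, rng) for m, n in zip(sizes[:-1], sizes[1:])]

# Check that activations neither explode nor vanish as depth grows.
x = rng.normal(size=(128, sizes[0]))
for w, b in params:
    x = np.maximum(x @ w + b, 0.0)   # ReLU
    print(round(float(x.std()), 3))
```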

Is Feature Diversity Necessary in Neural Network Initialization?

1 code implementation • 11 Dec 2019 • Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Standard practice in training neural networks involves initializing the weights in an independent fashion.

Wider Networks Learn Better Features

no code implementations • 25 Sep 2019 • Dar Gilboa, Guy Gur-Ari

Transferability of learned features between tasks can massively reduce the cost of training a neural network on a novel task.

A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off

1 code implementation • NeurIPS 2019 • Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Reducing the precision of weights and activation functions in neural network training, with minimal impact on performance, is essential for the deployment of these models in resource-constrained environments.

Quantization
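
As a hedged illustration of the general idea only (not the paper's mean-field analysis or its specific quantizer), here is a minimal sketch of symmetric uniform quantization of a weight tensor to a given bit width; the weight shape, scale convention, and use of NumPy are assumptions:

```python
import numpy as np

def quantize_uniform(w, num_bits):
    # Symmetric uniform quantizer: round weights onto 2**num_bits - 1 evenly
    # spaced levels in [-max|w|, max|w|], then return the dequantized values.
    levels = 2 ** num_bits - 1
    scale = np.abs(w).max() / (levels // 2)
    q = np.clip(np.round(w / scale), -(levels // 2), levels // 2)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256))
for bits in (8, 4, 2):
    err = np.mean((w - quantize_uniform(w, bits)) ** 2)
    print(bits, "bits -> reconstruction MSE", float(err))
```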

Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs

no code implementations • 25 Jan 2019 • Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S. Schoenholz, Ed H. Chi, Jeffrey Pennington

We demonstrate the efficacy of our initialization scheme on multiple sequence tasks, on which it enables successful training while a standard initialization either fails completely or is orders of magnitude slower.

Stochastic Bouncy Particle Sampler

1 code implementation • ICML 2017 • Ari Pakman, Dar Gilboa, David Carlson, Liam Paninski

We introduce a novel stochastic version of the non-reversible, rejection-free Bouncy Particle Sampler (BPS), a Markov process whose sample trajectories are piecewise linear.
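
For orientation only, here is a minimal sketch of the basic (non-stochastic) Bouncy Particle Sampler applied to a standard Gaussian target, illustrating the piecewise-linear trajectories and velocity bounces; the stochastic, minibatch variant introduced in the paper is not reproduced here, and the target, refresh rate, time horizon, and use of NumPy are all assumptions:

```python
import numpy as np

def bps_gaussian(total_time=2000.0, dt=0.5, dim=2, refresh_rate=1.0, seed=0):
    # Basic Bouncy Particle Sampler for a standard Gaussian target U(x) = |x|^2 / 2.
    # The trajectory is piecewise linear; the velocity is reflected off grad U at
    # bounce events and fully refreshed at random times to keep the chain ergodic.
    rng = np.random.default_rng(seed)
    x, v = rng.normal(size=dim), rng.normal(size=dim)
    t, next_sample, samples = 0.0, dt, []
    while t < total_time:
        a, b = x @ v, v @ v                       # bounce rate along the segment: max(0, a + b*s)
        e = rng.exponential()
        if a >= 0:                                # invert the integrated rate exactly
            t_bounce = (-a + np.sqrt(a * a + 2.0 * b * e)) / b
        else:
            t_bounce = -a / b + np.sqrt(2.0 * e / b)
        t_refresh = rng.exponential(1.0 / refresh_rate)
        tau = min(t_bounce, t_refresh)
        while next_sample <= t + tau:             # record states on a regular time grid
            samples.append(x + (next_sample - t) * v)
            next_sample += dt
        x, t = x + tau * v, t + tau
        if t_refresh < t_bounce:
            v = rng.normal(size=dim)              # full velocity refreshment
        else:
            g = x                                 # grad U(x) for the Gaussian target
            v = v - 2.0 * (v @ g) / (g @ g) * g   # specular reflection of v off grad U
    return np.array(samples)

xs = bps_gaussian()
print(xs.mean(axis=0), xs.var(axis=0))            # each coordinate should be roughly N(0, 1)
```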
