Search Results for author: Yaniv Blumenfeld

Found 5 papers, 3 papers with code

Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

no code implementations · 25 Jan 2024 · Yaniv Blumenfeld, Itay Hubara, Daniel Soudry

The majority of research on the quantization of Deep Neural Networks (DNNs) focuses on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients).

Quantization
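As a rough illustration of the accumulator bottleneck this paper targets (a generic sketch, not the paper's method; `saturating_dot` and `acc_bits` are hypothetical names), a low bit-width accumulator can be simulated by saturating each partial sum of an integer dot product:

```python
# Minimal sketch: an int8 dot product whose running sum is confined
# to a low bit-width accumulator via saturation (clipping).
import numpy as np

def saturating_dot(x, w, acc_bits=16):
    """Dot product of int8 vectors with a saturating accumulator."""
    lo, hi = -(2 ** (acc_bits - 1)), 2 ** (acc_bits - 1) - 1
    acc = 0
    for xi, wi in zip(x, w):
        # Saturate the partial sum to the accumulator's range,
        # as narrow hardware accumulators would.
        acc = int(np.clip(acc + int(xi) * int(wi), lo, hi))
    return acc

x = np.random.randint(-128, 128, size=64, dtype=np.int8)
w = np.random.randint(-128, 128, size=64, dtype=np.int8)
print(saturating_dot(x, w, acc_bits=12))
```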

How do Minimum-Norm Shallow Denoisers Look in Function Space?

no code implementations · NeurIPS 2023 · Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry

Neural network (NN) denoisers are an essential building block in many common tasks, ranging from image reconstruction to image generation.

Image Generation · Image Reconstruction

Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization?

1 code implementation · ICML 2020 · Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Deep neural networks are typically initialized with random weights, with variances chosen to facilitate signal propagation and stable gradients.
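For context, a minimal sketch of the variance-scaled random initialization the abstract describes; He initialization for ReLU networks is assumed here as a representative choice, not the paper's scheme:

```python
# Sketch: i.i.d. Gaussian weights with Var[w] = 2 / fan_in, which
# roughly preserves activation variance across ReLU layers.
import numpy as np

def he_init(fan_in, fan_out, seed=0):
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

W = he_init(256, 256)
print(W.std())  # close to sqrt(2 / 256)
```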

Is Feature Diversity Necessary in Neural Network Initialization?

1 code implementation · 11 Dec 2019 · Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Standard practice in training neural networks involves initializing the weights in an independent fashion.

A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off

1 code implementation · NeurIPS 2019 · Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry

Reducing the precision of weights and activation functions in neural network training, with minimal impact on performance, is essential for the deployment of these models in resource-constrained environments.

Quantization
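To make the reduced-precision setting concrete, here is a generic sketch of symmetric uniform quantization of a tensor (illustrative only; `quantize_uniform` is a hypothetical helper, not the scheme analyzed in the paper):

```python
# Sketch: symmetric uniform quantization of a float tensor to `bits`
# bits, returned as dequantized ("fake-quantized") float values.
import numpy as np

def quantize_uniform(x, bits=4):
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax          # map the max magnitude to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

x = np.random.randn(8).astype(np.float32)
print(quantize_uniform(x, bits=4))
```

The same rounding can be applied to weights and activations alike; the trade-off the paper studies is how such precision loss interacts with network depth.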
