Search Results for author: Itay Hubara

Found 21 papers, 15 papers with code

Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks

1 code implementation 2 Oct 2024 Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry

Many recent methods aim to merge neural networks (NNs) with identical architectures trained on different tasks to obtain a single multi-task model.

Knowledge Distillation

Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

no code implementations 25 Jan 2024 Yaniv Blumenfeld, Itay Hubara, Daniel Soudry

The majority of the research on the quantization of Deep Neural Networks (DNNs) is focused on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients).

Quantization

Minimum Variance Unbiased N:M Sparsity for the Neural Gradients

no code implementations 21 Mar 2022 Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry

We show that while minimization of the MSE works fine for pruning the weights and activations, it catastrophically fails for the neural gradients.
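
As a rough illustration of why unbiasedness matters, the sketch below contrasts deterministic top-k magnitude pruning (which minimizes the MSE of the pruned tensor) with a simple stochastic scheme that equals the original tensor in expectation. This is only a toy PyTorch example; the paper's method is a minimum-variance unbiased estimator under the N:M constraint, which is not implemented here.

```python
import torch

def prune_topk(x: torch.Tensor, k: int) -> torch.Tensor:
    """Deterministic pruning: keep the k largest |x|, zero the rest (minimizes MSE)."""
    idx = x.abs().topk(k).indices
    out = torch.zeros_like(x)
    out[idx] = x[idx]
    return out

def prune_unbiased(x: torch.Tensor, keep_prob: float) -> torch.Tensor:
    """Stochastic pruning: drop each entry with probability 1 - keep_prob and rescale
    the survivors by 1 / keep_prob, so the result equals x in expectation."""
    mask = (torch.rand_like(x) < keep_prob).float()
    return x * mask / keep_prob

g = torch.randn(8)
print(prune_topk(g, 4))          # biased toward zero, minimal squared error
print(prune_unbiased(g, 0.5))    # noisier, but unbiased: E[prune_unbiased(g)] == g
```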

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

1 code implementation NeurIPS 2021 Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry

Finally, to solve the problem of switching between different structure constraints, we suggest a method to convert a pre-trained model with unstructured sparsity to an N:M fine-grained block sparsity model with little to no training.
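
For reference, plain (one-directional) N:M magnitude pruning looks roughly like the PyTorch sketch below: keep the N largest-magnitude entries in every group of M consecutive weights. The paper goes further and searches for masks that remain N:M sparse after transposition, so both the forward and backward passes can be accelerated; that search is not implemented here.

```python
import torch

def prune_n_m(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude values in every group of m consecutive
    entries of the flattened weight (a plain N:M mask, e.g. 2:4)."""
    flat = weight.reshape(-1, m)                          # group entries in blocks of m
    idx = flat.abs().topk(n, dim=1).indices               # n largest magnitudes per block
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)   # 1 for kept entries, 0 otherwise
    return (flat * mask).reshape(weight.shape)

w = torch.randn(8, 16)
w_24 = prune_n_m(w)   # every group of 4 consecutive weights now has exactly 2 non-zeros
```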

MixSize: Training Convnets With Mixed Image Sizes for Improved Accuracy, Speed and Scale Resiliency

2 code implementations 1 Jan 2021 Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry

Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps.
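
The size flexibility comes from the fact that convolutions and global pooling do not fix the spatial resolution. A minimal PyTorch sketch (a toy model, not the paper's architecture or mixed-size training scheme):

```python
import torch
import torch.nn as nn

# A tiny CNN whose classifier is preceded by global average pooling, so the
# spatial size of the intermediate feature maps can vary with the input size.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
).eval()

with torch.no_grad():
    for size in (96, 128, 192):                  # same weights, different test-time resolutions
        logits = model(torch.randn(1, 3, size, size))
        print(size, logits.shape)                # always (1, 10)
```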

Training of Quantized Deep Neural Networks using a Magnetic Tunnel Junction-Based Synapse

no code implementations 29 Dec 2019 Tzofnat Greenberg Toledo, Ben Perach, Itay Hubara, Daniel Soudry, Shahar Kvatinsky

A recent example is the GXNOR framework for stochastic training of ternary (TNN) and binary (BNN) neural networks.

The Knowledge Within: Methods for Data-Free Model Compression

no code implementations CVPR 2020 Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry

Then, we demonstrate how these samples can be used to calibrate and fine-tune quantized models without using any real data in the process.

Model Compression
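
A minimal sketch of the calibration step: activation statistics are gathered on stand-in inputs and used to pick an 8-bit scale. Here the calibration inputs are plain random noise for brevity; the paper's point is that useful samples can instead be synthesized from the trained model's own internal statistics, which is not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# Collect post-ReLU activations on synthetic calibration inputs.
acts = []
hook = model[1].register_forward_hook(lambda mod, inp, out: acts.append(out.detach()))
with torch.no_grad():
    for _ in range(16):
        model(torch.randn(32, 64))      # random noise as a stand-in for generated samples
hook.remove()

scale = (torch.cat(acts).abs().max() / 127.0).item()   # symmetric int8 scale from calibration
print(f"int8 activation scale: {scale:.6f}")
```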

Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency

2 code implementations 12 Aug 2019 Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry

Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps.

Augment your batch: better training with larger batches

1 code implementation 27 Jan 2019 Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry

We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of deep neural networks and datasets.
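
The core trick is simple to state: every sample appears several times in the batch, each copy under a different random transform, so each gradient step averages over more augmented instances. A rough PyTorch sketch, with additive noise standing in for a real augmentation pipeline:

```python
import torch

def augment_batch(x: torch.Tensor, num_copies: int = 4) -> torch.Tensor:
    """Repeat every sample `num_copies` times and apply an independent random
    transform to each copy (additive noise here is only a stand-in augmentation)."""
    x = x.repeat_interleave(num_copies, dim=0)   # (B, ...) -> (B * num_copies, ...)
    return x + 0.1 * torch.randn_like(x)

batch = torch.randn(8, 3, 32, 32)
big_batch = augment_batch(batch)                 # 32 samples; the gradient averages over all copies
```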

Scalable Methods for 8-bit Training of Neural Networks

3 code implementations NeurIPS 2018 Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry

Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients.

Quantization
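
For intuition, a symmetric per-tensor 8-bit quantize/dequantize round trip (the kind of element-wise rounding applied to weights, activations, and layer gradients) can be sketched as follows. This is an illustrative fake-quant helper, not the paper's full 8-bit training scheme.

```python
import torch

def fake_quant_int8(x: torch.Tensor) -> torch.Tensor:
    """Symmetric per-tensor 8-bit quantization followed by dequantization."""
    scale = x.abs().max() / 127.0
    if scale == 0:
        return x
    return torch.clamp((x / scale).round(), -127, 127) * scale

w = torch.randn(256, 256)
w_q = fake_quant_int8(w)
print((w - w_q).abs().max())   # element-wise error is bounded by roughly scale / 2
```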

Train longer, generalize better: closing the generalization gap in large batch training of neural networks

1 code implementation NeurIPS 2017 Elad Hoffer, Itay Hubara, Daniel Soudry

Following this hypothesis we conducted experiments to show empirically that the "generalization gap" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used.
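
A back-of-the-envelope sketch of "adapting the training regime" for a larger batch: scale the learning rate by the square root of the batch-size ratio and train for proportionally more epochs so the total number of weight updates is preserved. All concrete numbers below are illustrative, not the paper's exact hyperparameters.

```python
import math

base_batch, base_lr, base_epochs = 256, 0.1, 90    # reference regime (illustrative numbers)
large_batch = 2048
train_set_size = 1_281_167                         # e.g. an ImageNet-sized training set

k = large_batch / base_batch
lr = base_lr * math.sqrt(k)                        # sqrt learning-rate scaling for the larger batch
epochs = int(base_epochs * k)                      # more epochs to keep the update count comparable

updates_base = base_epochs * (train_set_size // base_batch)
updates_large = epochs * (train_set_size // large_batch)
print(lr, epochs, updates_base, updates_large)     # roughly the same number of updates
```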

Spatial contrasting for deep unsupervised learning

no code implementations 21 Nov 2016 Elad Hoffer, Itay Hubara, Nir Ailon

Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks.

Playing SNES in the Retro Learning Environment

1 code implementation 7 Nov 2016 Nadav Bhonker, Shai Rozenberg, Itay Hubara

The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE.

Atari Games, Reinforcement Learning +1

Deep unsupervised learning through spatial contrasting

no code implementations 2 Oct 2016 Elad Hoffer, Itay Hubara, Nir Ailon

Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks.

Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

5 code implementations 22 Sep 2016 Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio

Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits.

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

26 code implementations 9 Feb 2016 Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio

We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time.

Binarized Neural Networks
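
The central building block, binarization with a straight-through gradient estimator, can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the paper's full training recipe.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: map x to {-1, +1}. Backward: straight-through estimator that
    passes the gradient where |x| <= 1 and blocks it elsewhere."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)   # 1 where |x| <= 1, else 0
```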

2 code implementations NeurIPS 2016 Itay Hubara, Daniel Soudry, Ran El-Yaniv

We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time and when computing the parameters' gradient at train-time.

Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights

2 code implementations NeurIPS 2014 Daniel Soudry, Itay Hubara, Ron Meir

Using online EP and the central limit theorem we find an analytical approximation to the Bayes update of this posterior, as well as the resulting Bayes estimates of the weights and outputs.

Binary text classification, text-classification +1
