Search Results for author: Matthieu Courbariaux

Found 9 papers, 7 papers with code

BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization

no code implementations · 8 Feb 2020 · Miloš Nikolić, Ghouthi Boukli Hacene, Ciaran Bannon, Alberto Delmas Lascorz, Matthieu Courbariaux, Yoshua Bengio, Vincent Gripon, Andreas Moshovos

Neural networks have demonstrably achieved state-of-the-art accuracy using low-bitlength integer quantization, yielding both execution time and energy benefits on existing hardware designs that support short bitlengths.

Quantization
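
The paper's title points at learning per-layer bitlengths during training. As a hedged sketch of that general idea (not the authors' exact formulation), a uniform quantizer can carry a continuous learnable bitlength that is penalized in the loss, so training trades accuracy against shorter bitlengths:

```python
import torch

# Hedged sketch of the general "learned bitlength" idea: a uniform quantizer
# whose bitlength is a continuous learnable parameter, penalized in the loss.
# Illustration only, not the paper's exact formulation.
class LearnedBitQuantizer(torch.nn.Module):
    def __init__(self, init_bits=8.0):
        super().__init__()
        self.bits = torch.nn.Parameter(torch.tensor(init_bits))

    def forward(self, x):
        levels = 2.0 ** self.bits - 1.0               # number of steps
        scale = x.abs().max().clamp(min=1e-8) / levels
        q = torch.round(x / scale)
        # Straight-through estimator: round in the forward pass,
        # identity in the backward pass.
        q = (q - x / scale).detach() + x / scale
        return q * scale

quant = LearnedBitQuantizer()
x = torch.randn(64)
loss = torch.nn.functional.mse_loss(quant(x), x) + 0.01 * quant.bits
loss.backward()   # the gradient reaches the learnable bitlength
```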

Attention Based Pruning for Shift Networks

1 code implementation · 29 May 2019 · Ghouthi Boukli Hacene, Carlos Lassance, Vincent Gripon, Matthieu Courbariaux, Yoshua Bengio

In many application domains such as computer vision, Convolutional Layers (CLs) are key to the accuracy of deep learning methods.

Object Recognition

BNN+: Improved Binary Network Training

no code implementations · ICLR 2019 · Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia

Binary neural networks (BNNs) help to alleviate the prohibitive resource requirements of DNNs by limiting both activations and weights to 1 bit.
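
The baseline this paper improves on binarizes values with a sign function and trains through it with a straight-through estimator; a minimal sketch of that baseline (the paper's improved gradient approximation is not shown here):

```python
import torch

# Minimal sketch of 1-bit binarization with a straight-through estimator,
# the baseline that BNN+ builds on; not the paper's modified training.
class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass the gradient through only where |x| <= 1 (hard-tanh window).
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(8, requires_grad=True)
y = BinarizeSTE.apply(x)
y.sum().backward()    # x.grad is 1 inside [-1, 1], 0 outside
```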

Regularized Binary Network Training

1 code implementation · ICLR 2019 · Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia

We propose to improve binary network training by introducing a new regularization function that encourages the training weights to concentrate around binary values.
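
A sketch of one plausible such regularizer (the exact functional form in the paper may differ): a term that is zero when a weight sits at ±1 and grows as it drifts away:

```python
import torch

# Hypothetical sketch of a regularizer that pulls real-valued weights toward
# +1/-1; the paper's exact functional form may differ.
def binary_regularizer(w):
    # Zero when |w| == 1, growing linearly as w drifts away from +-1.
    return (1.0 - w.abs()).abs().sum()

w = torch.randn(10, requires_grad=True)
loss = 0.001 * binary_regularizer(w)
loss.backward()   # gradient descent pushes each weight toward +-1
```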

Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

5 code implementations · 22 Sep 2016 · Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio

Quantized recurrent neural networks were tested on the Penn Treebank dataset and achieved accuracy comparable to their 32-bit counterparts while using only 4 bits.
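
As an illustration of what 4-bit precision means here, a generic k-bit uniform quantizer for values in [0, 1] (a sketch, not the paper's exact scheme):

```python
import torch

# Generic k-bit uniform quantizer for values in [0, 1]; illustrative only.
def quantize_k(x, k=4):
    levels = 2 ** k - 1               # 4 bits -> 15 steps, 16 levels
    xq = torch.round(x.clamp(0, 1) * levels) / levels
    # Straight-through estimator so the quantizer can be trained through.
    return (xq - x).detach() + x

x = torch.rand(5, requires_grad=True)
y = quantize_k(x)
y.sum().backward()
```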

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

26 code implementations · 9 Feb 2016 · Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio

We introduce a method to train Binarized Neural Networks (BNNs): neural networks with binary weights and activations at run-time.
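
The run-time payoff is that with values constrained to ±1, a dot product reduces to an XNOR followed by a popcount; a plain-Python illustration of the equivalence (not an optimized kernel):

```python
# With +-1 values packed as bits (+1 -> 1, -1 -> 0), a dot product becomes
# XNOR + popcount. Illustration only, not an optimized kernel.
def binary_dot(a_bits, w_bits, n):
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")     # popcount
    return 2 * matches - n             # equals the +-1 dot product

a = [1, -1, 1, 1]
w = [1, 1, -1, 1]
pack = lambda v: sum(1 << i for i, s in enumerate(v) if s == 1)
assert binary_dot(pack(a), pack(w), len(a)) == sum(x * y for x, y in zip(a, w))
```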

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

5 code implementations · NeurIPS 2015 · Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David

We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated.
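
A minimal sketch of that update rule (illustrative, not the authors' code): binarize the stored real-valued weights for each forward/backward pass, then apply the gradient to the real-valued weights and keep them clipped:

```python
import torch

# Sketch of the BinaryConnect update: forward/backward with binarized weights,
# gradient accumulation in the real-valued weights. Illustrative only.
w_real = torch.randn(4, 3, requires_grad=True)   # stored high-precision weights
x, target = torch.randn(2, 3), torch.randn(2, 4)
lr = 0.1

for step in range(10):
    # Detach trick: sign() in the forward pass, identity in the backward pass,
    # so the gradient w.r.t. the binary weights reaches w_real.
    w_bin = (torch.sign(w_real) - w_real).detach() + w_real
    loss = torch.nn.functional.mse_loss(x @ w_bin.t(), target)
    loss.backward()
    with torch.no_grad():
        w_real -= lr * w_real.grad    # update the real-valued weights
        w_real.clamp_(-1, 1)          # keep them in the binarization range
    w_real.grad = None
```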

Training deep neural networks with low precision multiplications

1 code implementation · 22 Dec 2014 · Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David

For each of these datasets and formats, we assess the impact of the precision of the multiplications on the final error after training.
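
A hedged sketch of the kind of measurement involved (generic fixed-point rounding, not the paper's exact formats): round both operands of a multiplication to a given number of fractional bits and watch the product error grow as precision drops:

```python
import numpy as np

# Simulate fixed-point multiplication by rounding both operands to a given
# number of fractional bits; generic illustration, not the paper's formats.
def to_fixed(x, frac_bits):
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
a, b = rng.standard_normal(10_000), rng.standard_normal(10_000)
for frac_bits in (16, 8, 4, 2):
    err = np.abs(to_fixed(a, frac_bits) * to_fixed(b, frac_bits) - a * b)
    print(f"{frac_bits:2d} fractional bits: mean |error| = {err.mean():.2e}")
```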
