no code implementations • 8 Feb 2020 • Miloš Nikolić, Ghouthi Boukli Hacene, Ciaran Bannon, Alberto Delmas Lascorz, Matthieu Courbariaux, Yoshua Bengio, Vincent Gripon, Andreas Moshovos
Neural networks have demonstrably achieved state-of-the-art accuracy using low-bitlength integer quantization, yielding both execution-time and energy benefits on existing hardware designs that support short bitlengths.
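As a minimal sketch of the low-bitlength integer quantization the abstract refers to, the snippet below maps float weights to signed integers with a single per-tensor scale. The function name `quantize_int` and the uniform symmetric scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def quantize_int(w, bits=8):
    """Uniform symmetric quantization to `bits`-bit signed integers.
    A generic sketch, not the paper's specific quantization scheme."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(w))), 1e-8) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int(w, bits=8)
w_hat = dequantize(q, scale)   # reconstruction error is at most scale/2
```

Shorter bitlengths (e.g. `bits=4`) trade reconstruction error for cheaper integer arithmetic, which is where the execution-time and energy benefits come from.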
1 code implementation • 29 May 2019 • Ghouthi Boukli Hacene, Carlos Lassance, Vincent Gripon, Matthieu Courbariaux, Yoshua Bengio
In many application domains such as computer vision, Convolutional Layers (CLs) are key to the accuracy of deep learning methods.
no code implementations • ICLR 2019 • Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia
Binary neural networks (BNNs) help to alleviate the prohibitive resource requirements of DNNs, as both activations and weights are limited to 1 bit.
1 code implementation • ICLR 2019 • Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia
We propose to improve the binary training method by introducing a new regularization function that encourages the trained weights to settle around binary values.
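One common form of such a regularizer penalizes weights by their distance from ±α, so the penalty is zero exactly at the binary values. This is a generic sketch in the spirit of the abstract; the function name and the squared-distance form are assumptions, not the paper's exact formulation.

```python
import numpy as np

def binary_regularizer(w, alpha=1.0):
    """Penalty that is zero at w = ±alpha and grows as weights drift
    away from those two values. Illustrative sketch only."""
    return float(np.sum((alpha - np.abs(w)) ** 2))

w = np.array([1.0, -1.0, 0.0, 0.5])
penalty = binary_regularizer(w)   # only the 0.0 and 0.5 entries contribute
```

Added to the task loss, this term pushes weights toward the two binary modes during training while still allowing gradient-based updates.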
5 code implementations • 22 Sep 2016 • Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
Quantized recurrent neural networks were tested on the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits.
26 code implementations • 9 Feb 2016 • Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
We introduce a method to train Binarized Neural Networks (BNNs): neural networks with binary weights and activations at run-time.
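The core operations behind such binarized networks can be sketched as a `sign` binarization in the forward pass and a clipped straight-through estimator for gradients. This is a toy NumPy illustration under those assumptions, not the paper's full training procedure.

```python
import numpy as np

def binarize(x):
    """Deterministic binarization: map to {-1, +1} (zero maps to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_grad(x, upstream):
    """Straight-through estimator: pass the upstream gradient where
    |x| <= 1 and zero it elsewhere (a clipped identity)."""
    return upstream * (np.abs(x) <= 1.0)

# Toy layer: both weights and activations are binary at run-time.
W = np.array([[0.7, -0.2], [-1.3, 0.4]])
a = np.array([0.9, -0.1])
out = binarize(a) @ binarize(W)   # -> array([ 2., -2.])
```

Because both operands are ±1, the matrix product reduces to XNOR-and-popcount arithmetic on suitable hardware, which is the source of the run-time savings.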
5 code implementations • NeurIPS 2015 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David
We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which the gradients are accumulated.
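The key idea above, using binary weights in the pass but accumulating updates in real-valued weights, can be sketched as a tiny training loop. The loss, data, and clipping range here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)

# BinaryConnect-style step (sketch): forward/backward uses binarized
# weights; the update is accumulated in the stored real-valued weights.
rng = np.random.default_rng(0)
w_real = rng.normal(size=3) * 0.1
x, target, lr = np.array([1.0, -1.0, 0.5]), 0.0, 0.01

for _ in range(10):
    w_bin = binarize(w_real)              # binary weights in the pass
    y = x @ w_bin
    grad = 2.0 * (y - target) * x         # dL/dw for L = (y - target)**2
    w_real -= lr * grad                   # accumulate into real weights
    w_real = np.clip(w_real, -1.0, 1.0)   # keep stored weights bounded
```

Keeping the real-valued accumulator is what lets many small gradient steps eventually flip a weight's sign, even though each forward pass only ever sees ±1.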
Ranked #30 on Image Classification on SVHN
2 code implementations • 11 Oct 2015 • Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio
For most deep learning algorithms, training is notoriously time-consuming.
1 code implementation • 22 Dec 2014 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David
For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training.
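One simple way to emulate reduced multiplication precision, in the spirit of the experiments described, is to round each operand to a fixed-point format before multiplying. The helper below is a minimal sketch under that assumption; the paper's actual formats and rounding rules may differ.

```python
import numpy as np

def fixed_point_round(x, frac_bits):
    """Round to a fixed-point value with `frac_bits` fractional bits.
    Illustrative sketch of reducing operand precision."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

a, b = 0.613, -1.207
exact = a * b
approx = fixed_point_round(a, 4) * fixed_point_round(b, 4)
error = abs(exact - approx)   # grows as frac_bits shrinks
```

Sweeping `frac_bits` and re-training at each setting is one way to measure how multiplication precision affects the final error.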