no code implementations • 17 Apr 2024 • MohammadHossein AskariHemmat, Ahmadreza Jeddi, Reyhane Askari Hemmat, Ivan Lazarevich, Alexander Hoffman, Sudhakar Sah, Ehsan Saboori, Yvon Savaria, Jean-Pierre David
In this work, we investigate the generalization properties of quantized neural networks, a characteristic that has received little attention despite its implications for model performance.
no code implementations • 24 Jun 2022 • MohammadHossein AskariHemmat, Reyhane Askari Hemmat, Alex Hoffman, Ivan Lazarevich, Ehsan Saboori, Olivier Mastropietro, Yvon Savaria, Jean-Pierre David
To confirm our analytical study, we performed an extensive set of experiments, summarized in this paper, showing that the regularization effect of quantization appears across various vision tasks, models, and datasets.
no code implementations • 29 Sep 2021 • Adithya Venkateswaran, Jean-Pierre David
Compared to very recent work on pruning for binary networks, we still achieve a 1% gain in precision and up to a 30% reduction in memory (526 KiB vs. 750 KiB).
2 code implementations • 2 Aug 2019 • MohammadHossein AskariHemmat, Sina Honari, Lucas Rouhier, Christian S. Perone, Julien Cohen-Adad, Yvon Savaria, Jean-Pierre David
We then apply our quantization algorithm to three datasets: (1) the Spinal Cord Gray Matter Segmentation (GM) dataset, (2) the ISBI challenge dataset for segmentation of neuronal structures in Electron Microscopy (EM) images, and (3) the public National Institutes of Health (NIH) dataset for pancreas segmentation in abdominal CT scans.
5 code implementations • NeurIPS 2015 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David
We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which the gradients are accumulated.
Ranked #30 on Image Classification on SVHN
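The binary-weight training scheme described in this entry can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the single linear layer, MSE loss, data, and learning rate are all hypothetical stand-ins, but the core idea is faithful — binarized weights are used in both propagation passes, while updates accumulate into real-valued stored weights that are clipped to [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued "stored" weights in which gradients are accumulated.
W_real = rng.normal(scale=0.1, size=(4, 3))

def binarize(W):
    # Deterministic binarization: the sign of the stored weights.
    return np.where(W >= 0, 1.0, -1.0)

lr = 0.01
for step in range(100):
    x = rng.normal(size=(8, 4))          # hypothetical input batch
    y = rng.normal(size=(8, 3))          # hypothetical targets

    W_bin = binarize(W_real)             # binary weights used for propagation
    pred = x @ W_bin                     # forward pass with binary weights
    grad_pred = 2 * (pred - y) / len(x)  # gradient of an MSE loss
    grad_W = x.T @ grad_pred             # backward pass also uses binary weights

    # Accumulate the gradient into the real-valued weights,
    # then clip them to [-1, 1] as BinaryConnect does.
    W_real = np.clip(W_real - lr * grad_W, -1.0, 1.0)
```

At inference time only the binarized weights are needed, which replaces most multiplications with sign flips and additions.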
1 code implementation • 22 Dec 2014 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David
For each of these datasets and for each of these formats, we assess the impact of the precision of the multiplications on the final error after training.
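The kind of precision study described here can be illustrated by simulating fixed-point multiplication at different bit widths. This is a hedged sketch, not the paper's implementation: the `to_fixed_point` helper and its format parameters (`int_bits`, `frac_bits`) are hypothetical, and only the general effect — finer fractional precision yields smaller multiplication error — is demonstrated.

```python
import numpy as np

def to_fixed_point(x, int_bits=2, frac_bits=5):
    # Round x onto a signed fixed-point grid with `frac_bits` fractional
    # bits and saturate to the representable range.
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, -max_val, max_val)

def low_precision_matmul(a, b, **fmt):
    # Quantize the operands before multiplying, mimicking reduced-precision
    # multipliers while keeping the accumulation in full precision.
    return to_fixed_point(a, **fmt) @ to_fixed_point(b, **fmt)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))
b = rng.normal(size=(4, 4))

exact = a @ b
err_5bit = np.abs(low_precision_matmul(a, b, frac_bits=5) - exact).max()
err_10bit = np.abs(low_precision_matmul(a, b, frac_bits=10) - exact).max()
# Increasing the fractional bit width shrinks the error of the products.
```

Sweeping such a format parameter while training to convergence, as the paper does across datasets, reveals how few multiplier bits a network can tolerate.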