Search Results for author: Jean-Pierre David

Found 6 papers, 3 papers with code

QGen: On the Ability to Generalize in Quantization Aware Training

no code implementations • 17 Apr 2024 • MohammadHossein AskariHemmat, Ahmadreza Jeddi, Reyhane Askari Hemmat, Ivan Lazarevich, Alexander Hoffman, Sudhakar Sah, Ehsan Saboori, Yvon Savaria, Jean-Pierre David

In this work, we investigate the generalization properties of quantized neural networks, a characteristic that has received little attention despite its implications for model performance.
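Quantization-aware training simulates low-precision arithmetic during optimization. As a minimal sketch of the general mechanism (not QGen's specific analysis), a symmetric uniform fake-quantizer rounds weights onto a low-bit grid in the forward pass while the full-precision copies keep receiving gradient updates; the `fake_quantize` helper and its parameters below are illustrative assumptions:

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Symmetric uniform quantization: round w onto a num_bits grid, then
    dequantize back to float, so quantization error appears in the forward
    pass while the stored full-precision weights remain trainable."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([0.31, -0.72, 0.05])
w_q = fake_quantize(w, num_bits=4)  # 4-bit simulated weights
```

The rounding error per weight is at most half the quantization step, which is the noise whose regularization and generalization effects this line of work studies.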


QReg: On Regularization Effects of Quantization

no code implementations • 24 Jun 2022 • MohammadHossein AskariHemmat, Reyhane Askari Hemmat, Alex Hoffman, Ivan Lazarevich, Ehsan Saboori, Olivier Mastropietro, Yvon Savaria, Jean-Pierre David

To confirm our analytical study, we performed an extensive set of experiments, summarized in this paper, showing that the regularization effects of quantization can be observed across various vision tasks, models, and datasets.


iPrune: A Magnitude Based Unstructured Pruning Method for Efficient Binary Networks in Hardware

no code implementations • 29 Sep 2021 • Adithya Venkateswaran, Jean-Pierre David

Compared to very recent work on pruning for binary networks, we still achieve a 1% gain in precision and up to a 30% reduction in memory (526 KiB vs. 750 KiB).
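The iPrune algorithm itself is not shown in this snippet; as a generic sketch of the magnitude-based unstructured pruning it builds on, one simply zeroes the smallest-magnitude weights until a target sparsity is reached (the function name and interface below are assumptions for illustration):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `sparsity` fraction of them become zero (unstructured)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([0.1, -0.4, 0.3, -0.2])
pruned = magnitude_prune(w, sparsity=0.5)  # keeps the two largest magnitudes
```

For binary networks the surviving weights are ±1, so the memory savings come from storing only the sparse mask and positions rather than a dense weight tensor.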

U-Net Fixed-Point Quantization for Medical Image Segmentation

2 code implementations • 2 Aug 2019 • MohammadHossein AskariHemmat, Sina Honari, Lucas Rouhier, Christian S. Perone, Julien Cohen-Adad, Yvon Savaria, Jean-Pierre David

We then apply our quantization algorithm to three datasets: (1) the Spinal Cord Gray Matter Segmentation (GM) dataset, (2) the ISBI challenge for segmentation of neuronal structures in Electron Microscopy (EM) images, and (3) the public National Institutes of Health (NIH) dataset for pancreas segmentation in abdominal CT scans.
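A fixed-point quantizer for this kind of setting can be sketched as follows; the signed format and the `integer_bits`/`frac_bits` split below are illustrative assumptions, not necessarily the bit allocation the paper uses:

```python
import numpy as np

def to_fixed_point(x, integer_bits=2, frac_bits=6):
    """Round x to a signed fixed-point grid: `frac_bits` fractional bits
    set the resolution, `integer_bits` set the representable range."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (integer_bits - 1))                 # most negative value
    hi = 2.0 ** (integer_bits - 1) - 1.0 / scale      # most positive value
    return np.clip(np.round(x * scale) / scale, lo, hi)

acts = np.array([0.126, -1.5, 5.0])
q_acts = to_fixed_point(acts, integer_bits=2, frac_bits=3)
```

Trading `integer_bits` against `frac_bits` under a fixed total budget is the central tuning knob when quantizing activations and weights of a segmentation network such as U-Net.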

Tasks: Image Segmentation, Pancreas Segmentation (+3)

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

5 code implementations • NeurIPS 2015 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David

We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which gradients are accumulated.
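The core idea can be sketched as a single simplified SGD step on a linear model (an illustration of the scheme, not the authors' released code): binarize the real-valued weights for the forward and backward pass, but apply the gradient update to the real-valued copy:

```python
import numpy as np

def binarize(w):
    """Deterministic binarization: sign of the weights (+1 / -1)."""
    return np.where(w >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.1, size=3)        # full-precision stored weights
x = rng.normal(size=(4, 3))
y = x @ np.array([1.0, -1.0, 1.0])            # toy regression targets

w_bin = binarize(w_real)                      # binary weights used in propagation
err = x @ w_bin - y
grad = x.T @ err / len(x)                     # gradient computed with binary weights
w_real = np.clip(w_real - 0.1 * grad, -1, 1)  # update and clip the real-valued copy
```

Clipping the stored weights to [-1, 1] after the update follows the paper's recipe, and the deterministic sign binarization shown here is one of the two schemes it proposes (the other is stochastic).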

Training deep neural networks with low precision multiplications

1 code implementation • 22 Dec 2014 • Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David

For each of those datasets and each of those formats, we assess the impact of the precision of the multiplications on the final error after training.
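As a toy model of this kind of assessment (the setup below is an illustrative assumption, not the paper's protocol), one can round every scalar product to a fixed-point grid and measure how the result diverges from the exact matrix multiply as the fractional bit width shrinks:

```python
import numpy as np

def round_to_format(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

def low_precision_matmul(a, b, frac_bits):
    """Matrix multiply in which every scalar product is rounded to the
    given format before accumulation (a toy model of low-precision
    multipliers; the paper also studies dynamic fixed point and float)."""
    prods = round_to_format(a[:, :, None] * b[None, :, :], frac_bits)
    return prods.sum(axis=1)

rng = np.random.default_rng(1)
a, b = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))
err = {fb: np.abs(low_precision_matmul(a, b, fb) - a @ b).max()
       for fb in (4, 8, 16)}
```

Each rounded product is off by at most half a grid step, so the worst-case error of an inner product of length n is bounded by n · 2^-(frac_bits + 1), which shrinks quickly as bits are added.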
