Search Results for author: Yvon Savaria

Found 9 papers, 2 papers with code

QGen: On the Ability to Generalize in Quantization Aware Training

no code implementations • 17 Apr 2024 • MohammadHossein AskariHemmat, Ahmadreza Jeddi, Reyhane Askari Hemmat, Ivan Lazarevich, Alexander Hoffman, Sudhakar Sah, Ehsan Saboori, Yvon Savaria, Jean-Pierre David

In this work, we investigate the generalization properties of quantized neural networks, a characteristic that has received little attention despite its implications for model performance.

Statistical Hardware Design With Multi-model Active Learning

no code implementations • 14 Mar 2023 • Alireza Ghaffari, Masoud Asgharian, Yvon Savaria

For instance, in our performance prediction setting, the proposed method needs 65% fewer samples to create the model, and in the design space exploration setting, it can find the best parameter settings by exploring fewer than 50 samples.

Active Learning, Transfer Learning

QReg: On Regularization Effects of Quantization

no code implementations • 24 Jun 2022 • MohammadHossein AskariHemmat, Reyhane Askari Hemmat, Alex Hoffman, Ivan Lazarevich, Ehsan Saboori, Olivier Mastropietro, Yvon Savaria, Jean-Pierre David

To confirm our analytical study, we performed an extensive set of experiments, summarized in this paper, which show that the regularization effects of quantization appear across various vision tasks, models, and datasets.

Quantization

Mobile-URSONet: an Embeddable Neural Network for Onboard Spacecraft Pose Estimation

1 code implementation • 4 May 2022 • Julien Posso, Guy Bois, Yvon Savaria

Spacecraft pose estimation is an essential computer vision application that can improve the autonomy of in-orbit operations.

Pose Estimation, Spacecraft Pose Estimation

MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators

no code implementations • 3 May 2022 • Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, François Leduc-Primeau

Memristors enable the computation of matrix-vector multiplications (MVM) in memory and therefore show great potential for substantially increasing the energy efficiency of deep neural network (DNN) inference accelerators.

Quantization
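MemSE predicts the output mean-squared error (MSE) of such noisy in-memory MVMs analytically. As a rough illustration of the setting only (not the paper's method), the sketch below estimates that MSE by Monte Carlo simulation under an assumed multiplicative Gaussian conductance-noise model; the noise level `sigma`, the shapes, and the noise model itself are placeholders.

```python
# Illustrative sketch only: Monte Carlo estimate of the output MSE of a
# matrix-vector multiplication (MVM) whose weight matrix is perturbed by
# multiplicative noise, loosely mimicking memristor conductance variability.
# The i.i.d. Gaussian noise model and all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_mvm_mse(W, x, sigma=0.05, trials=1000):
    """Empirical MSE between the ideal MVM W @ x and its noisy realizations."""
    y_ideal = W @ x
    errs = []
    for _ in range(trials):
        W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
        errs.append(np.mean((W_noisy @ x - y_ideal) ** 2))
    return float(np.mean(errs))

W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)
print(f"empirical output MSE: {noisy_mvm_mse(W, x):.4f}")
```

A closed-form predictor such as the one proposed in the paper avoids running this kind of simulation for every layer and noise level.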

Rethinking Pareto Frontier for Performance Evaluation of Deep Neural Networks

no code implementations • 18 Feb 2022 • Vahid Partovi Nia, Alireza Ghaffari, Mahdi Zolnouri, Yvon Savaria

We propose to use a multi-dimensional Pareto frontier to redefine the efficiency measure of candidate deep learning models, where several variables, such as training cost, inference latency, and accuracy, play a relative role in defining a dominant model.

Benchmarking, Image Classification
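To illustrate the dominance idea behind a multi-dimensional Pareto frontier (this is not the paper's evaluation code), the minimal sketch below keeps only the candidate models that no other model dominates, assuming lower training cost and inference latency are better and higher accuracy is better; the metric names and candidate values are made up.

```python
# Minimal sketch of multi-dimensional Pareto dominance over candidate models.
# Metrics and values are hypothetical, for illustration only.
from typing import Dict, List

def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
    """True if model `a` is at least as good as `b` on every metric and strictly better on one."""
    no_worse = (a["train_cost"] <= b["train_cost"]
                and a["latency_ms"] <= b["latency_ms"]
                and a["accuracy"] >= b["accuracy"])
    strictly_better = (a["train_cost"] < b["train_cost"]
                       or a["latency_ms"] < b["latency_ms"]
                       or a["accuracy"] > b["accuracy"])
    return no_worse and strictly_better

def pareto_frontier(models: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Keep only the models that no other model dominates."""
    return [m for m in models if not any(dominates(o, m) for o in models if o is not m)]

candidates = [
    {"name": "A", "train_cost": 10.0, "latency_ms": 5.0, "accuracy": 0.76},
    {"name": "B", "train_cost": 30.0, "latency_ms": 4.0, "accuracy": 0.80},
    {"name": "C", "train_cost": 35.0, "latency_ms": 6.0, "accuracy": 0.79},
]
print([m["name"] for m in pareto_frontier(candidates)])  # ['A', 'B']
```

Here model C is dominated by B (worse or equal on every metric), so only A and B remain on the frontier.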

Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks

no code implementations • 23 Dec 2019 • Sébastien Henwood, François Leduc-Primeau, Yvon Savaria

Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes a significant portion of the energy used during inference.

U-Net Fixed-Point Quantization for Medical Image Segmentation

2 code implementations • 2 Aug 2019 • MohammadHossein AskariHemmat, Sina Honari, Lucas Rouhier, Christian S. Perone, Julien Cohen-Adad, Yvon Savaria, Jean-Pierre David

We then apply our quantization algorithm to three datasets: (1) the Spinal Cord Gray Matter (GM) segmentation dataset, (2) the ISBI challenge dataset for segmentation of neuronal structures in Electron Microscopy (EM) images, and (3) the public National Institutes of Health (NIH) dataset for pancreas segmentation in abdominal CT scans.

Image Segmentation, Pancreas Segmentation +3
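As a minimal illustration of fixed-point quantization in general (not the paper's exact U-Net scheme), the sketch below rounds a weight tensor to a signed fixed-point grid; the chosen bit-widths, rounding mode, and clipping range are assumptions.

```python
# Illustrative sketch of uniform signed fixed-point ("Qm.n"-style) quantization.
# Bit-widths and rounding here are assumptions, not the paper's configuration.
import numpy as np

def fixed_point_quantize(x: np.ndarray, total_bits: int = 8, frac_bits: int = 6) -> np.ndarray:
    """Round x onto a signed fixed-point grid with `frac_bits` fractional bits, then dequantize."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)  # integer codes
    return q / scale  # "fake quantized" values back in floating point

w = np.random.default_rng(0).standard_normal(5).astype(np.float32)
print(w)
print(fixed_point_quantize(w, total_bits=8, frac_bits=6))
```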
