Search Results for author: François Leduc-Primeau

Found 11 papers, 0 papers with code

A Novel Benchmark for Few-Shot Semantic Segmentation in the Era of Foundation Models

no code implementations • 20 Jan 2024 • Reda Bensaid, Vincent Gripon, François Leduc-Primeau, Lukas Mauch, Ghouthi Boukli Hacene, Fabien Cardinaux

In this study, we investigate which vision foundation models are most effective for few-shot semantic segmentation, a critical task in computer vision.

Few-Shot Semantic Segmentation • Segmentation • +1
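
As a point of reference for the task, the following is a minimal sketch of a common prototype-based few-shot segmentation baseline on top of a frozen feature extractor; the masked-average-pooling scheme, the temperature value, and all names here are illustrative assumptions, not the benchmark protocol proposed in the paper.

```python
# Minimal prototype-based few-shot segmentation baseline (illustrative;
# a common baseline, not the paper's benchmark protocol).
import torch
import torch.nn.functional as F

def few_shot_segment(support_feats, support_masks, query_feats):
    """support_feats: (K, C, H, W) features of K support images from a
    frozen foundation model; support_masks: (K, H, W) {0, 1} float masks
    for the novel class; query_feats: (C, H, W) features of the query.
    Returns an (H, W) foreground probability map."""
    fg_area = support_masks.sum().clamp(min=1)
    bg_area = (1 - support_masks).sum().clamp(min=1)
    # Masked average pooling: one prototype per class from the support set.
    fg = (support_feats * support_masks.unsqueeze(1)).sum(dim=(0, 2, 3)) / fg_area
    bg = (support_feats * (1 - support_masks).unsqueeze(1)).sum(dim=(0, 2, 3)) / bg_area
    # Classify each query pixel by cosine similarity to the two prototypes.
    q = F.normalize(query_feats, dim=0)                      # (C, H, W)
    protos = F.normalize(torch.stack([bg, fg]), dim=1)       # (2, C)
    logits = torch.einsum('pc,chw->phw', protos, q) * 10.0   # temperature 10
    return logits.softmax(dim=0)[1]                          # foreground prob.
```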

SAGE-HB: Swift Adaptation and Generalization in Massive MIMO Hybrid Beamforming

no code implementations • 19 Jan 2024 • Ali Hasanzadeh Karkan, Hamed Hojatian, Jean-François Frigon, François Leduc-Primeau

Deep learning (DL)-based solutions have emerged as promising candidates for beamforming in massive Multiple-Input Multiple-Output (mMIMO) systems.

Data Augmentation • Domain Generalization • +1

SAMSON: Sharpness-Aware Minimization Scaled by Outlier Normalization for Improving DNN Generalization and Robustness

no code implementations • 18 Nov 2022 • Gonçalo Mordido, Sébastien Henwood, Sarath Chandar, François Leduc-Primeau

In this work, we show that applying sharpness-aware training, by optimizing for both the loss value and loss sharpness, significantly improves robustness to noisy hardware at inference time without relying on any assumptions about the target hardware.
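
For context, the following is a minimal sketch of a generic sharpness-aware minimization (SAM) training step, the family of methods the abstract builds on; `rho` and the surrounding names are placeholders, and SAMSON's outlier-normalized scaling of the perturbation is deliberately not reproduced here.

```python
# Generic sharpness-aware minimization (SAM) step: perturb the weights
# toward higher loss, then descend from the perturbed point. SAMSON's
# per-weight outlier normalization is NOT reproduced; this is vanilla SAM.
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # Step 1 (ascent): approximate the worst-case nearby weights by moving
    # each parameter along its gradient, with total perturbation norm rho.
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # Step 2 (descent): compute the gradient at the perturbed weights,
    # restore the original weights, then apply the update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```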

MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators

no code implementations • 3 May 2022 • Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, François Leduc-Primeau

Memristors enable the computation of matrix-vector multiplications (MVM) in memory and, therefore, show great potential for substantially increasing the energy efficiency of deep neural network (DNN) inference accelerators.

Quantization
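
To make the setting concrete, here is a toy model of a noisy memristive MVM, assuming a multiplicative Gaussian perturbation on the programmed conductances; the noise model and `sigma` are illustrative assumptions, and the Monte-Carlo error estimate shown is exactly the kind of slow simulation a fast analytical MSE predictor would avoid.

```python
# Toy model of an in-memory MVM on a memristive crossbar: weights are
# stored as conductances that deviate from their programmed values.
# The Gaussian noise model and sigma are illustrative assumptions,
# not MemSE's analytical MSE predictor.
import numpy as np

def noisy_crossbar_mvm(W, x, sigma=0.02, rng=np.random.default_rng(0)):
    """W: (m, n) weight matrix, x: (n,) input vector. Each conductance is
    perturbed by relative Gaussian noise of standard deviation sigma."""
    G = W * (1 + sigma * rng.standard_normal(W.shape))  # programmed vs. actual
    return G @ x                                        # column currents sum

W = np.random.default_rng(1).standard_normal((4, 8))
x = np.ones(8)
exact = W @ x
noisy = noisy_crossbar_mvm(W, x)
print("empirical MSE:", np.mean((noisy - exact) ** 2))
```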

Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks

no code implementations • 23 Dec 2019 • Sébastien Henwood, François Leduc-Primeau, Yvon Savaria

Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes a significant portion of the energy used during inference.
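
The general mechanism behind this line of work is training with noise injected into the stored weights. Below is a minimal sketch of one such training step, assuming an `nn.Sequential` model; the per-layer noise levels are placeholders, and how to set them layer by layer is the question the paper addresses.

```python
# Sketch of one training step with additive Gaussian noise on the weights,
# assuming an nn.Sequential model. The per-layer noise levels are
# placeholders; choosing them layer by layer is what the paper studies.
import torch

def noisy_weight_step(model, loss_fn, x, y, optimizer, sigma_per_layer):
    noises = []
    with torch.no_grad():
        for layer, sigma in zip(model, sigma_per_layer):
            if hasattr(layer, 'weight') and layer.weight is not None:
                n = sigma * torch.randn_like(layer.weight)
                layer.weight.add_(n)     # perturb the stored weights
                noises.append(n)
            else:
                noises.append(None)      # e.g. activation layers
    loss = loss_fn(model(x), y)          # forward pass sees noisy weights
    loss.backward()                      # gradients taken at the noisy point
    with torch.no_grad():                # strip the noise before updating
        for layer, n in zip(model, noises):
            if n is not None:
                layer.weight.sub_(n)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```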

Training Modern Deep Neural Networks for Memory-Fault Robustness

no code implementations • 23 Nov 2019 • Ghouthi Boukli Hacene, François Leduc-Primeau, Amal Ben Soussia, Vincent Gripon, François Gagnon

Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging.
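
A common way to obtain such robustness is to inject memory faults during training so the network learns to tolerate them. The sketch below uses a simple erasure fault model (each stored parameter read as zero with probability p); this fault model and the helper names are illustrative assumptions, not necessarily those of the paper.

```python
# Illustrative erasure fault model: each stored parameter is independently
# read as zero with probability p. An assumption of this sketch, not
# necessarily the fault model used in the paper.
import torch

def apply_memory_faults(model, p=0.01):
    """Zero out a random fraction p of every parameter tensor, returning
    the overwritten values so the faults can be undone afterwards."""
    saved = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            mask = torch.rand_like(param) < p
            saved[name] = (mask, param[mask].clone())
            param[mask] = 0.0
    return saved

def undo_memory_faults(model, saved):
    with torch.no_grad():
        for name, param in model.named_parameters():
            mask, values = saved[name]
            param[mask] = values

# Fault-aware training step: the backward pass runs on the faulty weights,
# so the network learns parameters that still work when some are lost.
# saved = apply_memory_faults(model, p=0.02)
# loss = loss_fn(model(x), y)
# loss.backward()
# undo_memory_faults(model, saved)
# optimizer.step(); optimizer.zero_grad()
```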

A Study of Deep Learning Robustness Against Computation Failures

no code implementations • 18 Apr 2017 • Jean-Charles Vialatte, François Leduc-Primeau

For many types of integrated circuits, accepting larger failure rates in computations can be used to improve energy efficiency.
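
The kind of experiment such a study runs can be sketched as follows: inject random computation failures at inference time and measure the resulting accuracy loss. The failure model here (each activation element zeroed with probability p) and the `evaluate` helper are illustrative assumptions.

```python
# Sketch of a fault-injection experiment: corrupt layer outputs at
# inference time and sweep the failure rate. The failure model (each
# activation zeroed with probability p) is an illustrative assumption.
import torch

def add_failure_hooks(model, p):
    """Register forward hooks that zero each output element with prob. p."""
    handles = []
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            def fail(mod, inputs, output, p=p):
                return output * (torch.rand_like(output) >= p)
            handles.append(module.register_forward_hook(fail))
    return handles  # call h.remove() on each handle to restore the model

# for p in [0.0, 0.001, 0.01, 0.1]:
#     handles = add_failure_hooks(model, p)
#     acc = evaluate(model, test_loader)   # hypothetical evaluation helper
#     for h in handles:
#         h.remove()
#     print(p, acc)
```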

VLSI Implementation of Deep Neural Network Using Integral Stochastic Computing

no code implementations • 29 Sep 2015 • Arash Ardakani, François Leduc-Primeau, Naoya Onizawa, Takahiro Hanyu, Warren J. Gross

We also synthesize the circuits in a 65 nm CMOS technology and show that the proposed integral stochastic architecture yields up to a 21% reduction in energy consumption compared to the binary radix implementation at the same misclassification rate.
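
For background, stochastic computing encodes a value x in [0, 1] as a Bernoulli bit-stream of rate x, so a single AND gate multiplies two independent streams; integral stochastic computing extends the representable range by summing m binary streams, so each stream element is an integer in [0, m]. The toy simulation below illustrates the encoding; the stream length and the value of m are illustrative choices, not the paper's design parameters.

```python
# Toy simulation of (integral) stochastic computing arithmetic.
# Stream length N and m are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
N = 4096                                  # stream length (precision ~ 1/sqrt(N))

def encode(x, m=1):
    """Encode x in [0, m] as an integral stochastic stream: the elementwise
    sum of m Bernoulli bit-streams, each of rate x / m."""
    return sum(rng.random(N) < x / m for _ in range(m))

a, b = 0.3, 0.6
sa, sb = encode(a), encode(b)             # binary streams (m = 1)
product = np.mean(sa & sb)                # an AND gate computes a * b
print(f"a*b = {a * b:.3f}, stochastic estimate = {product:.3f}")

s2 = encode(1.7, m=2)                     # integer stream with values in {0,1,2}
print(f"value 1.7, integral estimate = {np.mean(s2):.3f}")
```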
