1 code implementation • 20 Jan 2024 • Reda Bensaid, Vincent Gripon, François Leduc-Primeau, Lukas Mauch, Ghouthi Boukli Hacene, Fabien Cardinaux
In recent years, the rapid evolution of computer vision has produced a variety of foundation models, each tailored to specific data types and tasks.
no code implementations • 19 Jan 2024 • Ali Hasanzadeh Karkan, Hamed Hojatian, Jean-François Frigon, François Leduc-Primeau
Deep learning (DL)-based solutions have emerged as promising candidates for beamforming in massive multiple-input multiple-output (mMIMO) systems.
no code implementations • 11 Aug 2023 • Hamed Hojatian, Zoubeir Mlika, Jérémy Nadal, Jean-François Frigon, François Leduc-Primeau
First, we propose an energy model for different beamforming structures.
no code implementations • 18 Nov 2022 • Gonçalo Mordido, Sébastien Henwood, Sarath Chandar, François Leduc-Primeau
In this work, we show that sharpness-aware training, which optimizes for both the loss value and the loss sharpness, significantly improves robustness to noisy hardware at inference time without relying on any assumptions about the target hardware.
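For readers unfamiliar with the idea, the sketch below shows a minimal sharpness-aware minimization (SAM)-style update in PyTorch: take a gradient ascent step of radius rho to find a nearby high-loss point, then descend using the gradient computed there. The model, optimizer, and the hyperparameter rho are illustrative assumptions, not the paper's exact training setup.

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One sharpness-aware update: perturb the weights toward the locally
    highest loss, then descend from that perturbed point."""
    # 1) Gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Climb to the approximate worst-case point inside an
    #    L2 ball of radius rho around the current weights.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)  # w <- w + eps (ascent step)
            eps.append(e)
    model.zero_grad()

    # 3) Gradient of the loss at the perturbed weights,
    #    i.e. the sharpness-aware gradient.
    loss_fn(model(x), y).backward()

    # 4) Undo the perturbation, then let the base optimizer step
    #    using the gradient computed at w + eps.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```

Because the descent direction is evaluated at the worst-case neighbour rather than at the weights themselves, training is pushed toward flat minima, which is what makes the resulting network less sensitive to hardware-induced weight perturbations at inference time.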
no code implementations • 10 Aug 2022 • Hamed Hojatian, Jérémy Nadal, Jean-François Frigon, François Leduc-Primeau
Hybrid beamforming is a promising technology to improve the energy efficiency of massive MIMO systems.
no code implementations • 3 May 2022 • Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, François Leduc-Primeau
Memristors enable the computation of matrix-vector multiplications (MVM) in memory and therefore show great potential for substantially increasing the energy efficiency of deep neural network (DNN) inference accelerators.
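To make the in-memory MVM idea concrete, here is a toy NumPy model of an analog crossbar multiply: each weight is mapped to a differential pair of conductances, each device is perturbed by programming noise, and the output is read as a difference of column currents. The conductance range, lognormal noise level, and differential mapping are illustrative assumptions, not the paper's device model.

```python
import numpy as np

rng = np.random.default_rng(0)

def memristive_mvm(W, x, g_min=1e-6, g_max=1e-4, sigma=0.05):
    """Toy analog matrix-vector multiply on a memristor crossbar."""
    w_abs_max = np.abs(W).max()
    # Map the positive and negative parts of W linearly onto the
    # available conductance range, on separate crossbar columns.
    G_pos = g_min + (g_max - g_min) * np.clip(W, 0, None) / w_abs_max
    G_neg = g_min + (g_max - g_min) * np.clip(-W, 0, None) / w_abs_max
    # Device-to-device variation: multiplicative lognormal noise.
    G_pos = G_pos * rng.lognormal(0.0, sigma, G_pos.shape)
    G_neg = G_neg * rng.lognormal(0.0, sigma, G_neg.shape)
    # Ohm's and Kirchhoff's laws do the MVM "for free":
    # each column current is sum_j G_ij * x_j.
    i_out = G_pos @ x - G_neg @ x
    # Rescale the current back to the weight domain.
    return i_out * w_abs_max / (g_max - g_min)

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("ideal:", W @ x)
print("noisy:", memristive_mvm(W, x))
```

Running both lines side by side shows the accuracy cost of the analog computation, which is exactly the kind of conductance-variation noise that training methods for memristor-based accelerators must tolerate.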
no code implementations • 12 Mar 2020 • Hamed Hojatian, Vu Nguyen Ha, Jérémy Nadal, Jean-François Frigon, François Leduc-Primeau
Hybrid beamforming is a promising technology for 5G millimetre-wave communications.
no code implementations • 23 Dec 2019 • Sébastien Henwood, François Leduc-Primeau, Yvon Savaria
Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes a significant portion of the energy used during inference.
no code implementations • 23 Nov 2019 • Ghouthi Boukli Hacene, François Leduc-Primeau, Amal Ben Soussia, Vincent Gripon, François Gagnon
Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging.
no code implementations • 18 Apr 2017 • Jean-Charles Vialatte, François Leduc-Primeau
For many types of integrated circuits, accepting a higher rate of computation errors can improve energy efficiency.
no code implementations • 29 Sep 2015 • Arash Ardakani, François Leduc-Primeau, Naoya Onizawa, Takahiro Hanyu, Warren J. Gross
We also synthesize the circuits in a 65 nm CMOS technology and show that the proposed integral stochastic architecture reduces energy consumption by up to 21% compared to the binary radix implementation at the same misclassification rate.
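For context on the representation itself, the sketch below illustrates integral stochastic computing in NumPy: a value x in [-m, m] is encoded as the element-wise sum of m bipolar stochastic bit-streams, so each stream element is an integer with E[s_t] = x, and addition becomes exact element-wise integer addition rather than the scaled addition of conventional stochastic computing. The stream length and the choice m = 2 are illustrative assumptions, not parameters of the synthesized circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def integral_stream(x, m=2, length=4096):
    """Encode x in [-m, m] as an integral stochastic stream: the sum of
    m independent bipolar streams (bits in {-1, +1}), so every element
    is an integer in [-m, m] and the stream mean estimates x."""
    # Each bipolar stream encodes x/m, i.e. P(+1) = (x/m + 1) / 2.
    p_one = (x / m + 1.0) / 2.0
    bits = rng.random((m, length)) < p_one   # m Bernoulli bit-streams
    bipolar = 2 * bits.astype(int) - 1       # map {0, 1} -> {-1, +1}
    return bipolar.sum(axis=0)               # integer-valued stream

a = integral_stream(1.3, m=2)
b = integral_stream(-0.4, m=2)
# Integral streams add exactly, element-wise, with no scaling loss.
s = a + b
print("estimate of 1.3:", a.mean())
print("estimate of 0.9:", s.mean())
```

The ability to represent values outside [-1, 1] and to add streams without down-scaling is what lets integral stochastic architectures keep accuracy at shorter stream lengths, which in turn drives the energy savings reported above.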