no code implementations • 18 Mar 2024 • Dongshu Liu, Jérémie Laydevant, Adrien Pontlevy, Damien Querlioz, Julie Grollier
Current unsupervised learning methods depend either on end-to-end training via deep learning techniques such as self-supervised learning, which have high computational requirements, or on layer-by-layer training using bio-inspired approaches like Hebbian learning, whose local learning rules are incompatible with supervised learning.
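For readers unfamiliar with the class of local rules mentioned here, a minimal sketch of one classic example (Oja's rule, a stabilized Hebbian update; the dataset, sizes, and learning rate are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))   # toy dataset: 1000 samples, 20 features
w = rng.normal(size=20)
w /= np.linalg.norm(w)
eta = 0.01                        # illustrative learning rate

for x in X:
    y = w @ x                     # postsynaptic activity
    # Oja's rule: Hebbian term y*x plus a decay that keeps ||w|| bounded.
    # The update uses only locally available quantities (x, y, w).
    w += eta * y * (x - y * w)

# w converges toward the leading principal component of the data
print(np.linalg.norm(w))
```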
no code implementations • 15 Dec 2023 • Djohan Bonnet, Tifenn Hirtzlin, Tarcisius Januel, Thomas Dalgaty, Damien Querlioz, Elisa Vianello
Catastrophic forgetting remains a challenge for neural networks, especially in lifelong learning scenarios.
no code implementations • 21 Jun 2023 • Simone D'Agostino, Filippo Moro, Tifenn Hirtzlin, Julien Arcamone, Niccolò Castellani, Damien Querlioz, Melika Payvand, Elisa Vianello
In this work, we extend this solution to quantized neural networks (QNNs) and present a memristor-based hardware solution for implementing metaplasticity during both inference and training.
no code implementations • 21 Mar 2022 • Nikhil Garg, Ismael Balafrej, Terrence C. Stewart, Jean Michel Portal, Marc Bocquet, Damien Querlioz, Dominique Drouin, Jean Rouat, Yann Beilliard, Fabien Alibart
To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits.
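A rough sketch of the flavor of rule involved: a leaky integrate-and-fire neuron whose synapses are updated, at each presynaptic spike, as a function of the postsynaptic membrane voltage alone. All constants and the exact update shape below are assumptions for illustration, not the paper's VDSP rule:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_pre = 1000, 50                          # time steps, presynaptic inputs
pre_spikes = rng.random((T, n_pre)) < 0.02   # Poisson-like input spikes
w = rng.uniform(0.0, 0.5, n_pre)
v, v_rest, v_th, tau, eta = 0.0, 0.0, 1.0, 20.0, 0.005

for t in range(T):
    # leaky integrate-and-fire membrane update
    v += (-(v - v_rest) + pre_spikes[t] @ w) / tau
    if v >= v_th:
        v = v_rest                # spike and reset (refractoriness omitted)
    # Voltage-dependent plasticity, sketched: on each presynaptic spike the
    # update depends only on the postsynaptic membrane voltage -- no
    # spike-timing traces needed. Depolarized membrane -> potentiate;
    # near rest -> depress.
    dw = eta * (v - 0.5 * v_th) * pre_spikes[t]
    w = np.clip(w + dw, 0.0, 1.0)

print(w.mean())
```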
no code implementations • 14 Nov 2021 • Walid A. Misba, Mark Lozano, Damien Querlioz, Jayasimha Atulasimha
We demonstrate that extremely low-resolution quantized (nominally 5-state) synapses with large stochastic variations in Domain Wall (DW) position can both be energy efficient and achieve reasonably high test accuracies compared to Deep Neural Networks (DNNs) of similar size that use floating-point synaptic weights.
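A minimal sketch of the weight model this describes: map each weight to one of five nominal levels, then perturb the programmed level with noise standing in for stochastic domain-wall positions (the level count, range, and noise amplitude are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_5_state(w, w_max=1.0, sigma=0.1):
    """Map weights to 5 nominal levels, then perturb each programmed level
    with Gaussian noise modelling stochastic domain-wall positions."""
    levels = np.linspace(-w_max, w_max, 5)          # 5 nominal conductance states
    idx = np.abs(w[..., None] - levels).argmin(-1)  # nearest-level assignment
    return levels[idx] + sigma * rng.normal(size=w.shape)

w = rng.normal(scale=0.5, size=(4, 4))
print(quantize_5_state(w))
```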
1 code implementation • 23 Jul 2021 • Xing Chen, Flavio Abreu Araujo, Mathieu Riou, Jacob Torrejon, Dafiné Ravelosona, Wang Kang, Weisheng Zhao, Julie Grollier, Damien Querlioz
Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and in a small fraction of the simulation time required by the micromagnetic simulations usually employed to model them.
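As a structural illustration only (not the authors' network or data), a one-step-ahead surrogate fitted by ridge regression to a toy trajectory shows the general idea of replacing a physics-level model with a learned predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "device": a damped driven oscillation standing in for a measured
# spintronic trajectory (purely illustrative; real data would come from
# micromagnetic simulation or experiment).
t = np.linspace(0, 20, 2000)
drive = np.sin(0.7 * t)
state = np.exp(-0.05 * t) * np.sin(3.0 * t) + 0.3 * drive

# One-step-ahead surrogate: predict state[t+1] from (state[t], drive[t])
# with ridge regression -- a stand-in for the trained dynamical network.
X = np.stack([state[:-1], drive[:-1], np.ones_like(state[:-1])], axis=1)
y = state[1:]
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

pred = X @ w
print("one-step RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```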
no code implementations • 2 Jul 2021 • Atreya Majumdar, Marc Bocquet, Tifenn Hirtzlin, Axel Laborieux, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz
However, the resistive change behavior in this regime suffers from strong fluctuations and is particularly challenging to model, especially in a way compatible with the tools used for simulating deep learning.
1 code implementation • CVPR Workshop Binary Vision 2021 • Jérémie Laydevant, Maxence Ernoult, Damien Querlioz, Julie Grollier
We first train systems with binary weights and full-precision activations, achieving an accuracy equivalent to that of full-precision models trained by standard EP on MNIST, and losing only 1.9% accuracy on CIFAR-10 with the same architecture.
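The binary-weight scheme here follows the common hidden-weight pattern: full-precision weights are kept as latent variables and only their sign is used in the forward pass. The sketch below shows that pattern with a plain least-squares gradient standing in for the EP update (sizes and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden full-precision weights; the forward pass only ever sees their sign.
w_hidden = 0.1 * rng.normal(size=(10, 5))
x = rng.normal(size=(32, 10))
target = rng.normal(size=(32, 5))
eta = 0.01

for _ in range(100):
    w_bin = np.sign(w_hidden)            # binary weights used in the forward pass
    out = x @ w_bin
    grad_out = 2 * (out - target) / len(x)
    # Straight-through-style update: the gradient computed w.r.t. the binary
    # weights is applied directly to the hidden full-precision weights.
    w_hidden -= eta * (x.T @ grad_out)

print(np.mean((x @ np.sign(w_hidden) - target) ** 2))
```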
2 code implementations • 19 Jan 2021 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz
Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to "catastrophic forgetting": they rapidly forget the previous task when trained on a new one.
no code implementations • 14 Jan 2021 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz
Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning.
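A minimal NumPy sketch of the two-phase EP procedure on a small symmetric network; the energy function, dynamics, and all hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n, n_out = 5, 8, 2                      # sizes are illustrative
Wx = 0.1 * rng.normal(size=(n, n_in))
W = 0.1 * rng.normal(size=(n, n))
W = 0.5 * (W + W.T); np.fill_diagonal(W, 0)   # symmetric, no self-coupling

def relax(x, y=None, beta=0.0, steps=200, dt=0.1):
    """Settle to a fixed point of the energy; if beta > 0, the output
    units are additionally nudged toward the target y."""
    s = np.zeros(n)
    for _ in range(steps):
        r = np.tanh(s)
        ds = -s + (1 - r**2) * (W @ r + Wx @ x)    # -dE/ds
        if beta > 0.0:
            ds[:n_out] += beta * (y - s[:n_out])   # -beta * dC/ds
        s += dt * ds
    return s

x, y = rng.normal(size=n_in), np.array([1.0, -1.0])
beta, eta = 0.1, 0.05
s_free = relax(x)                         # first (free) phase
s_nudge = relax(x, y, beta=beta)          # second (nudged) phase
# EP update: contrastive and local in space -- each synapse only needs the
# activities of the two neurons it connects, in the two phases.
r0, rb = np.tanh(s_free), np.tanh(s_nudge)
W += eta / beta * (np.outer(rb, rb) - np.outer(r0, r0))
np.fill_diagonal(W, 0)
```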
no code implementations • 15 Oct 2020 • Erwann Martin, Maxence Ernoult, Jérémie Laydevant, Shuai Li, Damien Querlioz, Teodora Petrisor, Julie Grollier
Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge.
no code implementations • 20 Jun 2020 • Bogdan Penkovsky, Marc Bocquet, Tifenn Hirtzlin, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz
With new memory technology available, emerging Binarized Neural Networks (BNNs) promise to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on edge devices and avoiding data transfer over the network.
1 code implementation • 6 Jun 2020 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz
In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP.
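The bias cancellation is analogous to replacing a one-sided finite difference by a central one: nudging with both +beta and -beta removes the leading-order error term. A scalar toy example makes the two bias orders visible (the function f below is a stand-in, not the paper's loss):

```python
import numpy as np

# The EP gradient estimate with a single positive nudge behaves like a
# one-sided finite difference: its bias is O(beta). Using two nudges of
# opposite sign (+beta and -beta) behaves like a central difference and
# cancels the leading-order bias term.
f = lambda b: np.sin(1.0 + b)       # stand-in for the nudged loss/energy
true_grad = np.cos(1.0)             # exact derivative at b = 0

for beta in [0.5, 0.1, 0.02]:
    one_sided = (f(beta) - f(0.0)) / beta
    symmetric = (f(beta) - f(-beta)) / (2 * beta)
    print(f"beta={beta:5.2f}  one-sided bias={abs(one_sided - true_grad):.2e}"
          f"  symmetric bias={abs(symmetric - true_grad):.2e}")
```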
no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.
no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.
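A hedged sketch of the continual-update idea that addresses this: during the nudged phase, each time step applies only the *change* of the local Hebbian term, so the increments telescope to the end-of-phase EP update without storing any past state. All constants are illustrative, and the nudge is applied to every unit for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, beta, eta, dt = 6, 100, 0.2, 0.05, 0.1
W = 0.1 * rng.normal(size=(n, n)); W = 0.5 * (W + W.T)
s = rng.normal(size=n)
y = rng.normal(size=n)

r_prev = np.tanh(s)
dW_total = np.zeros_like(W)
for _ in range(steps):
    r = np.tanh(s)
    ds = -s + (1 - r**2) * (W @ r) + beta * (y - s)   # nudged dynamics
    s += dt * ds
    r_new = np.tanh(s)
    # Continual update: at each step, apply the change of the local Hebbian
    # term. Summed over the phase, these increments telescope to the
    # standard end-of-phase EP update, using only currently available state.
    dW = (eta / beta) * (np.outer(r_new, r_new) - np.outer(r_prev, r_prev))
    W += dW
    dW_total += dW
    r_prev = r_new

print(np.abs(dW_total).mean())
```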
1 code implementation • 7 Mar 2020 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz
In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting.
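A sketch of a metaplastic hidden-weight update in this spirit; the attenuation function and the strength parameter m below are assumptions patterned on the description above, not a verified reproduction of the paper's rule:

```python
import numpy as np

rng = np.random.default_rng(0)
w_hidden = 0.1 * rng.normal(size=100)   # hidden real-valued weights of a BNN
grad = rng.normal(size=100)             # gradient w.r.t. the binary weights
eta, m = 0.01, 1.3                      # m: metaplasticity strength

update = -eta * grad
# Metaplastic rule (sketch): an update that pushes a hidden weight further
# from zero ("consolidating" it) is attenuated by a factor that decays with
# |w_hidden| -- weights that have drifted far from zero become hard to
# un-learn, protecting knowledge acquired on previous tasks.
consolidating = np.sign(update) == np.sign(w_hidden)
factor = np.where(consolidating, 1.0 - np.tanh(m * np.abs(w_hidden))**2, 1.0)
w_hidden += factor * update
```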
no code implementations • 12 Aug 2019 • Tifenn Hirtzlin, Bogdan Penkovsky, Jacques-Olivier Klein, Nicolas Locatelli, Adrien F. Vincent, Marc Bocquet, Jean-Michel Portal, Damien Querlioz
One of the most exciting applications of Spin Torque Magnetoresistive Random Access Memory (ST-MRAM) is the in-memory implementation of deep neural networks, which could improve the energy efficiency of Artificial Intelligence by orders of magnitude relative to its implementation on computers and graphics cards.
1 code implementation • 3 Jun 2019 • Tifenn Hirtzlin, Bogdan Penkovsky, Marc Bocquet, Jacques-Olivier Klein, Jean-Michel Portal, Damien Querlioz
In this work, we propose a stochastic computing version of Binarized Neural Networks, where the input is also binarized.
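In bipolar stochastic computing, a value in [-1, 1] is encoded as the bias of a random bitstream, and multiplication reduces to a bitwise XNOR. A minimal sketch of that encoding (the bitstream length is an illustrative choice trading accuracy for cost):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4096                              # bitstream length

def encode(v):
    """Bipolar stochastic encoding: P(bit = 1) = (v + 1) / 2, v in [-1, 1]."""
    return rng.random(L) < (v + 1) / 2

def decode(bits):
    return 2 * bits.mean() - 1

a, b = 0.6, -0.5
# With bipolar encoding, multiplying two values needs only one XNOR gate
# per bit -- the same operation at the heart of binarized accelerators.
prod = ~(encode(a) ^ encode(b))
print(decode(prod), "~=", a * b)
```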
2 code implementations • NeurIPS 2019 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e., RNNs that are fed by a static input x and settle to a steady state.
no code implementations • 12 Sep 2017 • Christopher H. Bennett, Damien Querlioz, Jacques-Olivier Klein
By translating the database into the time domain and using variable integration windows, up to 95% classification accuracy is achieved.
no code implementations • 27 Jun 2016 • Christopher H. Bennett, Selina La Barbera, Adrien F. Vincent, Fabien Alibart, Damien Querlioz
This approach outperforms a conventional ELM-inspired system when the first layer is imprinted before training and testing, and especially so when variability in device timing evolution is considered: variability is therefore transformed from an issue to a feature.