Search Results for author: Damien Querlioz

Found 21 papers, 7 papers with code

Unsupervised End-to-End Training with a Self-Defined Bio-Inspired Target

no code implementations • 18 Mar 2024 • Dongshu Liu, Jérémie Laydevant, Adrien Pontlevy, Damien Querlioz, Julie Grollier

Current unsupervised learning methods either depend on end-to-end training via deep learning techniques such as self-supervised learning, which have high computational requirements, or employ layer-by-layer training using bio-inspired approaches like Hebbian learning, whose local learning rules are incompatible with supervised learning.

Self-Supervised Learning

Synaptic metaplasticity with multi-level memristive devices

no code implementations • 21 Jun 2023 • Simone D'Agostino, Filippo Moro, Tifenn Hirtzlin, Julien Arcamone, Niccolò Castellani, Damien Querlioz, Melika Payvand, Elisa Vianello

In this work, we extend this solution to quantized neural networks (QNNs) and present a memristor-based hardware solution for implementing metaplasticity during both inference and training.

Energy Efficient Learning with Low Resolution Stochastic Domain Wall Synapse Based Deep Neural Networks

no code implementations • 14 Nov 2021 • Walid A. Misba, Mark Lozano, Damien Querlioz, Jayasimha Atulasimha

We demonstrate that extremely low resolution quantized (nominally 5-state) synapses with large stochastic variations in Domain Wall (DW) position can be both energy efficient and achieve reasonably high testing accuracies compared to Deep Neural Networks (DNNs) of similar sizes using floating precision synaptic weights.

Quantization
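As a rough illustration of the idea above (not the authors' actual device model), a nominally 5-state synapse with stochastic domain-wall position variation could be modeled by snapping a float weight to the nearest of five levels and perturbing the stored value; the level range, count, and Gaussian noise model here are all assumptions:

```python
import numpy as np

def quantize_synapse(w, n_states=5, noise_std=0.05, rng=None):
    """Map float weights in [-1, 1] onto n_states discrete levels, then
    perturb each stored level to mimic stochastic domain-wall position
    variation (hypothetical Gaussian noise model)."""
    rng = np.random.default_rng() if rng is None else rng
    levels = np.linspace(-1.0, 1.0, n_states)  # e.g. 5 evenly spaced levels
    # Pick the nearest level for every weight.
    idx = np.abs(levels[None, :] - np.clip(w, -1, 1)[:, None]).argmin(axis=1)
    quantized = levels[idx]
    noisy = quantized + rng.normal(0.0, noise_std, size=quantized.shape)
    return np.clip(noisy, -1.0, 1.0)

w = np.array([-0.9, -0.2, 0.0, 0.3, 0.95])
print(quantize_synapse(w, noise_std=0.0))  # noiseless: [-1.  0.  0.  0.5 1. ]
```

With `noise_std=0.0` the function reduces to plain nearest-level quantization, which makes the stochastic variation easy to switch on and off in an experiment.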

Forecasting the outcome of spintronic experiments with Neural Ordinary Differential Equations

1 code implementation • 23 Jul 2021 • Xing Chen, Flavio Abreu Araujo, Mathieu Riou, Jacob Torrejon, Dafiné Ravelosona, Wang Kang, Weisheng Zhao, Julie Grollier, Damien Querlioz

Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and an extremely efficient simulation time, compared to the micromagnetic simulations that are usually employed to model them.
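A hedged sketch of the general approach: learn the device's time derivative with a small network f_theta and roll the state forward with an ODE solver. The tiny MLP, forward-Euler integrator, and 2-dimensional state below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" dynamics f_theta: a tiny MLP mapping the
# device state x and drive u to its time derivative dx/dt.
W1 = rng.normal(scale=0.5, size=(16, 3)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(2, 16)); b2 = np.zeros(2)

def f_theta(x, u):
    h = np.tanh(W1 @ np.concatenate([x, [u]]) + b1)
    return W2 @ h + b2

def rollout(x0, drive, dt=1e-2):
    """Forward-Euler integration of the learned ODE, a Neural-ODE-style
    surrogate standing in for a much slower micromagnetic simulation."""
    x, traj = np.array(x0, dtype=float), []
    for u in drive:
        x = x + dt * f_theta(x, u)  # x_{t+1} = x_t + dt * f_theta(x_t, u_t)
        traj.append(x.copy())
    return np.stack(traj)

traj = rollout([0.1, 0.0], drive=np.sin(np.linspace(0, 2 * np.pi, 100)))
print(traj.shape)  # (100, 2)
```

In practice the MLP weights would be fitted to a minimal amount of measured trajectories and a higher-order solver would replace the Euler step.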

Model of the Weak Reset Process in HfOx Resistive Memory for Deep Learning Frameworks

no code implementations • 2 Jul 2021 • Atreya Majumdar, Marc Bocquet, Tifenn Hirtzlin, Axel Laborieux, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz

However, the resistive change behavior in this regime suffers many fluctuations and is particularly challenging to model, especially in a way compatible with tools used for simulating deep learning.

Handwritten Digit Recognition

Training Dynamical Binary Neural Networks with Equilibrium Propagation

1 code implementation • CVPR Workshop Binary Vision 2021 • Jérémie Laydevant, Maxence Ernoult, Damien Querlioz, Julie Grollier

We first train systems with binary weights and full-precision activations, achieving an accuracy equivalent to that of full-precision models trained by standard EP on MNIST, and losing only 1.9% accuracy on CIFAR-10 with an equal architecture.
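For illustration, the standard binary-weight recipe (keep a full-precision hidden copy, binarize with sign() in the forward pass, apply updates to the hidden copy) can be sketched as follows; this is the common BNN trick, not necessarily the exact procedure used with EP here, and the error signal is a placeholder:

```python
import numpy as np

def binarize(w_hidden):
    """Binary weights from full-precision hidden weights via sign();
    zero is mapped to +1 by convention."""
    return np.where(w_hidden >= 0, 1.0, -1.0)

rng = np.random.default_rng(1)
w_hidden = rng.normal(scale=0.1, size=(4, 3))  # full-precision copy kept for updates
x = rng.normal(size=3)                         # full-precision activation vector
y = binarize(w_hidden) @ x                     # forward pass uses only +/-1 weights

# A gradient computed w.r.t. the binary weights is applied to the hidden
# full-precision copy (straight-through-style update, placeholder error).
grad = np.outer(y - 1.0, x)
w_hidden -= 0.01 * grad
print(binarize(w_hidden))
```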

Synaptic metaplasticity in binarized neural networks

2 code implementations • 19 Jan 2021 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz

Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to "catastrophic forgetting": they rapidly forget the previous task when trained on a new one.

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

no code implementations • 14 Jan 2021 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning.

EqSpike: Spike-driven Equilibrium Propagation for Neuromorphic Implementations

no code implementations • 15 Oct 2020 • Erwann Martin, Maxence Ernoult, Jérémie Laydevant, Shuai Li, Damien Querlioz, Teodora Petrisor, Julie Grollier

Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge.

In-Memory Resistive RAM Implementation of Binarized Neural Networks for Medical Applications

no code implementations • 20 Jun 2020 • Bogdan Penkovsky, Marc Bocquet, Tifenn Hirtzlin, Jacques-Olivier Klein, Etienne Nowak, Elisa Vianello, Jean-Michel Portal, Damien Querlioz

With new memory technology available, emerging Binarized Neural Networks (BNNs) are promising to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on the edge devices and avoiding data transfer over the network.

BIG-bench Machine Learning

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

1 code implementation • 6 Jun 2020 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP.
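The flavor of this bias cancellation can be illustrated with ordinary finite differences: one-sided finite nudging behaves like a forward difference with O(beta) error, while a symmetric two-sided estimate cancels the leading error term. The scalar function below is purely illustrative, standing in for the EP objective as a function of the nudging strength:

```python
import numpy as np

def grad_one_sided(f, x, beta):
    """Forward-difference estimate: bias of order O(beta)."""
    return (f(x + beta) - f(x)) / beta

def grad_symmetric(f, x, beta):
    """Central-difference estimate: bias of order O(beta^2)."""
    return (f(x + beta) - f(x - beta)) / (2 * beta)

f = lambda t: np.sin(t)  # illustrative stand-in, not the EP loss itself
x, beta = 0.7, 0.1
exact = np.cos(x)
print(abs(grad_one_sided(f, x, beta) - exact))   # larger error
print(abs(grad_symmetric(f, x, beta) - exact))   # much smaller error
```

The symmetric estimate needs a second nudged phase (at -beta) but removes the dominant bias term, which is the analogy to the cancellation exploited in the paper.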

Equilibrium Propagation with Continual Weight Updates

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.

Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.

Synaptic Metaplasticity in Binarized Neural Networks

1 code implementation • 7 Mar 2020 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz

In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting.
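One way such a metaplastic rule can be sketched (an illustrative reading of the idea, with the modulation function and its strength parameter as assumptions): attenuate updates that push a strongly consolidated hidden weight back toward zero, so that well-learned binary signs resist flipping while weakly consolidated ones stay plastic:

```python
import numpy as np

def metaplastic_update(h, grad, lr=0.1, m=1.3):
    """Hedged sketch of a metaplastic hidden-weight update for a BNN:
    updates that move h toward zero (i.e. toward a sign flip of the
    binary weight) are damped by 1 - tanh^2(m*|h|), so large |h| means
    a strongly consolidated, hard-to-flip weight."""
    delta = -lr * grad
    toward_zero = np.sign(delta) == -np.sign(h)
    factor = np.where(toward_zero, 1.0 - np.tanh(m * np.abs(h)) ** 2, 1.0)
    return h + factor * delta

h = np.array([2.0, 0.1, -2.0])   # two consolidated weights, one fresh
g = np.array([1.0, 1.0, -1.0])   # every gradient pushes h toward zero
print(metaplastic_update(h, g))
```

The fresh weight (|h| = 0.1) moves almost the full step, while the consolidated ones (|h| = 2) barely budge, which is the forgetting-resistance effect in miniature.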

Implementing Binarized Neural Networks with Magnetoresistive RAM without Error Correction

no code implementations • 12 Aug 2019 • Tifenn Hirtzlin, Bogdan Penkovsky, Jacques-Olivier Klein, Nicolas Locatelli, Adrien F. Vincent, Marc Bocquet, Jean-Michel Portal, Damien Querlioz

One of the most exciting applications of Spin Torque Magnetoresistive Random Access Memory (ST-MRAM) is the in-memory implementation of deep neural networks, which could improve the energy efficiency of Artificial Intelligence by orders of magnitude with respect to its implementation on computers and graphics cards.

Stochastic Computing for Hardware Implementation of Binarized Neural Networks

1 code implementation • 3 Jun 2019 • Tifenn Hirtzlin, Bogdan Penkovsky, Marc Bocquet, Jacques-Olivier Klein, Jean-Michel Portal, Damien Querlioz

In this work, we propose a stochastic computing version of Binarized Neural Networks, where the input is also binarized.

Emerging Technologies
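A minimal sketch of the stochastic-computing flavor of a BNN neuron, assuming bipolar encoding (everything here, including the bitstream lengths, is illustrative): each input activation is a random bitstream whose mean encodes its value, multiplication by a +/-1 weight reduces to a sign product (an XNOR in bipolar encoding), and the neuron output is the sign of the accumulated popcount:

```python
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(p, length=1024):
    """Encode a probability p in [0, 1] as a random bitstream whose
    mean approximates p (stochastic-computing representation)."""
    return rng.random(length) < p

def binary_neuron(x_probs, weights, length=4096):
    """Accumulate the signed bit products over all inputs and
    threshold: the sign product of bipolar bits plays the role of
    the XNOR used in hardware BNN implementations."""
    acc = 0
    for p, w in zip(x_probs, weights):
        bits = to_bitstream(p, length)
        signed = np.where(bits, 1, -1) * w  # bipolar product (XNOR)
        acc += signed.sum()
    return 1 if acc >= 0 else -1

x = [0.9, 0.2, 0.8]   # input activations encoded as probabilities
w = [+1, -1, +1]      # binary weights
print(binary_neuron(x, w))
```

Longer bitstreams trade latency for accuracy, which is the central dial in stochastic-computing hardware.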

Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

2 code implementations • NeurIPS 2019 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e. RNNs that are fed by a static input x and settle to a steady state.
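For a concrete toy instance of the two-phase EP procedure (a single linear energy layer chosen for brevity, not the paper's network): relax to a free steady state, relax again with the output nudged toward the target by a factor beta, and update each weight locally from the difference of the two steady states:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 3, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))

def settle(x, y_target=None, beta=0.0, steps=200, dt=0.1):
    """Relax the output units s toward a steady state of the energy
    E = 0.5*||s||^2 - s.T @ W @ x + beta * 0.5*||s - y_target||^2."""
    s = np.zeros(n_out)
    for _ in range(steps):
        grad = s - W @ x
        if y_target is not None:
            grad += beta * (s - y_target)
        s -= dt * grad
    return s

x = np.array([1.0, -0.5, 0.2])
y = np.array([1.0, 0.0])
beta, lr = 0.1, 0.5

s_free = settle(x)                       # first (free) phase
s_nudged = settle(x, y, beta=beta)       # second (nudged) phase
W += lr * np.outer(s_nudged - s_free, x) / beta  # local EP-style update
print(s_free, s_nudged)
```

For this linear energy the free fixed point is s = W @ x, and the EP update reduces to a delta rule scaled by 1/(1+beta), so the two-phase local rule recovers a true gradient step here.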

Spatio-temporal Learning with Arrays of Analog Nanosynapses

no code implementations • 12 Sep 2017 • Christopher H. Bennett, Damien Querlioz, Jacques-Olivier Klein

By translating the database into the time domain and using variable integration windows, up to 95% classification accuracy is achieved.

Exploiting the Short-term to Long-term Plasticity Transition in Memristive Nanodevice Learning Architectures

no code implementations • 27 Jun 2016 • Christopher H. Bennett, Selina La Barbera, Adrien F. Vincent, Fabien Alibart, Damien Querlioz

This approach outperforms a conventional ELM-inspired system when the first layer is imprinted before training and testing, and especially so when variability in device timing evolution is considered: variability is therefore transformed from an issue to a feature.

Classification • General Classification
