Search Results for author: Benjamin Scellier

Found 16 papers, 7 papers with code

Equivalence of Equilibrium Propagation and Recurrent Backpropagation

1 code implementation • 22 Nov 2017 • Benjamin Scellier, Yoshua Bengio

Recurrent Backpropagation and Equilibrium Propagation are supervised learning algorithms for fixed-point recurrent neural networks; the two algorithms differ in their second phase.
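
A hedged sketch in standard EP notation (state s, parameters θ, input x, target y, energy F, cost C, nudging strength β; these symbols are assumptions here, not quoted from the paper): both algorithms share the same first phase, a relaxation to the free fixed point, and differ in what follows.

\[
s^0 = \arg\min_s F(\theta, x, s), \qquad
s^\beta = \arg\min_s \big[\, F(\theta, x, s) + \beta\, C(s, y) \,\big].
\]

Equilibrium Propagation's second phase is the nudged relaxation to s^β; Recurrent Backpropagation's second phase instead iterates a linear dynamics at s^0 whose fixed point is the error (adjoint) vector from which the gradient is read out.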

Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation

2 code implementations • 16 Feb 2016 • Benjamin Scellier, Yoshua Bengio

Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point, or stationary distribution) towards a configuration that reduces prediction error.
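In the weak-nudging limit, the learning rule recovers the exact loss gradient by contrasting the energy's parameter gradient at the free fixed point s^0 and the nudged fixed point s^β (a hedged restatement in assumed notation, with F the energy and β the nudging strength):

\[
\nabla_\theta \mathcal{L} \;=\; \lim_{\beta \to 0} \frac{1}{\beta}
\left( \frac{\partial F}{\partial \theta}\big(\theta, x, s^\beta\big)
     - \frac{\partial F}{\partial \theta}\big(\theta, x, s^0\big) \right).
\]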

Generalization of Equilibrium Propagation to Vector Field Dynamics

3 code implementations • 14 Aug 2018 • Benjamin Scellier, Anirudh Goyal, Jonathan Binas, Thomas Mesnard, Yoshua Bengio

The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists.

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

1 code implementation • 6 Jun 2020 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP.
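Concretely, the standard one-sided EP estimate (a free phase followed by a single nudged phase of strength β) carries a bias of order β. The remedy, summarized here in assumed notation with s^{±β} the fixed points obtained by nudging with strength ±β, is a symmetric two-sided estimate:

\[
\widehat{\nabla}_\theta \mathcal{L} \;=\; \frac{1}{2\beta}
\left( \frac{\partial F}{\partial \theta}\big(s^{+\beta}\big)
     - \frac{\partial F}{\partial \theta}\big(s^{-\beta}\big) \right),
\]

which cancels the leading-order bias term and leaves an error of order β².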

Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

2 code implementations • NeurIPS 2019 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e., RNNs that are fed a static input x and settle to a steady state.
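
A minimal Python sketch (not the authors' implementation) of what "settling to a steady state" means here; W and U are hypothetical recurrent and input weight matrices:

    import numpy as np

    def settle(W, U, x, steps=500, tol=1e-6):
        # Relax a convergent RNN driven by a static input x to its fixed point.
        # In the energy-based setting, W is typically symmetric so that the
        # dynamics are guaranteed to converge.
        s = np.zeros(W.shape[0])
        for _ in range(steps):
            s_new = np.tanh(W @ s + U @ x)       # same static input every step
            if np.linalg.norm(s_new - s) < tol:  # stop at the steady state s*
                return s_new
            s = s_new
        return s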

A Fast Algorithm to Simulate Nonlinear Resistive Networks

1 code implementation • 18 Feb 2024 • Benjamin Scellier

In the quest for energy-efficient artificial intelligence systems, resistor networks are attracting interest as an alternative to conventional GPU-based neural networks.

Feedforward Initialization for Fast Inference of Deep Generative Networks is biologically plausible

no code implementations • 6 Jun 2016 • Yoshua Bengio, Benjamin Scellier, Olexa Bilaniuk, Joao Sacramento, Walter Senn

We find conditions under which a simple feedforward computation is a very good initialization for inference, after the input units are clamped to observed values.
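
A hedged sketch of the idea: initialize the latent layers with a single feedforward pass, then refine them by relaxation with the input clamped. The layered, Hopfield-style dynamics below are an illustrative assumption, not the paper's exact model:

    import numpy as np

    rho = np.tanh  # activation; the choice is illustrative

    def infer(weights, x, steps=50, eps=0.1):
        h = [x]
        for W in weights:                  # feedforward initialization
            h.append(W @ rho(h[-1]))
        for _ in range(steps):             # iterative inference, h[0] = x clamped
            for l in range(1, len(h)):
                bottom_up = weights[l - 1] @ rho(h[l - 1])
                top_down = weights[l].T @ rho(h[l + 1]) if l + 1 < len(h) else 0.0
                h[l] += eps * (bottom_up + top_down - h[l])
        return h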

Equilibrium Propagation with Continual Weight Updates

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed only after the dynamics of the second phase have converged, and it requires information from the first phase that is no longer physically available.
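
A hedged Python sketch of the continual-update idea (names and dynamics are illustrative, not the paper's code): rather than one weight update after the nudged phase converges, apply a small, time-local update at every step, using only the current and previous states:

    import numpy as np

    def nudged_phase_continual(s, W, x_in, y, out, beta, eta, steps, dt=0.5):
        # s: state at the end of the free phase; out: indices of output units
        for _ in range(steps):
            ds = np.tanh(W @ s + x_in) - s     # relaxation dynamics
            ds[out] += beta * (y - s[out])     # nudge outputs toward the target
            s_next = s + dt * ds
            # Hebbian-style update from two consecutive states only: no stored
            # first-phase information is needed at update time.
            W += (eta / beta) * (np.outer(s_next, s_next) - np.outer(s, s))
            s = s_next
        return s, W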

Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.

Training End-to-End Analog Neural Networks with Equilibrium Propagation

no code implementations • 2 Jun 2020 • Jack Kendall, Ross Pantone, Kalpana Manickavasagam, Yoshua Bengio, Benjamin Scellier

We introduce a principled method to train end-to-end analog neural networks by stochastic gradient descent.
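
For a resistive network whose trainable parameters are conductances, the resulting update has a strikingly local, power-like form. Schematically (assumed notation, and up to sign and scaling conventions):

\[
\Delta g_{ij} \;\propto\; -\frac{1}{\beta}
\left[ \big(V_i^{\beta} - V_j^{\beta}\big)^{2} - \big(V_i^{0} - V_j^{0}\big)^{2} \right],
\]

where V^0 and V^β are the node voltages at the free and nudged equilibria, so each conductance needs only the voltage drop across itself.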

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

no code implementations • 14 Jan 2021 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

Equilibrium Propagation (EP) is a biologically inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the spatial locality of its learning rule, fosters the design of energy-efficient hardware dedicated to learning.

A deep learning theory for neural networks grounded in physics

no code implementations • 18 Mar 2021 • Benjamin Scellier

Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm.

Learning Theory

Agnostic Physics-Driven Deep Learning

no code implementations • 30 May 2022 • Benjamin Scellier, Siddhartha Mishra, Yoshua Bengio, Yann Ollivier

This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (Aeqprop) procedure that combines energy minimization, homeostatic control, and nudging towards the correct response.
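
Schematically (a hedged paraphrase, not the paper's exact statement): one Aeqprop step, a free equilibration followed by a nudged re-equilibration under homeostatic control of the parameters, moves the parameters as

\[
\theta \;\leftarrow\; \theta \;-\; \eta\, \nabla_\theta \mathcal{L}(\theta)
\;+\; \text{higher-order terms in } \eta \text{ and } \beta,
\]

i.e., it implements a step of stochastic gradient descent while the physical system itself, rather than a digital processor, does the work of the gradient.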

A universal approximation theorem for nonlinear resistive networks

no code implementations • 22 Dec 2023 • Benjamin Scellier, Siddhartha Mishra

Resistor networks have recently had a surge of interest as substrates for energy-efficient self-learning machines.

Self-Learning
