Training Deep Spiking Neural Networks with Bio-plausible Learning Rules

29 Sep 2021 · Yukun Yang, Peng Li

There is a marked divide between biologically plausible approaches and practical backpropagation-based approaches to training deep spiking neural networks (DSNNs) with high performance. The well-known bio-plausible learning rule, Spike-Timing-Dependent Plasticity (STDP), cannot explain how the brain adjusts synaptic weights to encode information through precise spike timing, while the widely applied backpropagation (BP) algorithms lack a biologically credible mechanism. In this work, we ask whether a DSNN can be trained using bio-plausible learning rules alone and still reach accuracy comparable to that of BP-based training. We observe that an STDP rule computed between the membrane potential waveform in the apical dendrite of a pyramidal cell and its input spike train, with the help of local recurrent connections formed by somatostatin (SOM) interneurons, is able to perform supervised learning. This architecture is also supported by recent observations of the brain's cortical microcircuits. This new view of how spiking neurons may accurately adjust their complex temporal dynamics through special local feedback connections bridges the performance gap between bio-plausible and BP-based approaches and offers a possible answer to how the brain learns. We verify our observation with a simplified spiking neuron model and two different cell types on several datasets, and further provide a theoretical proof of the equivalence between STDP and BP under a special circumstance.
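For readers unfamiliar with STDP, the standard pairwise form of the rule can be sketched in a few lines. This is the textbook exponential-window formulation, not the paper's dendritic-potential variant; the parameter names and values (`a_plus`, `tau_plus`, etc.) are illustrative defaults, not taken from the paper.

```python
import numpy as np

def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Classic pairwise STDP weight update (illustrative sketch).

    pre_times / post_times: spike times (ms) of the pre- and postsynaptic
    neuron. A pre-before-post pair potentiates the synapse; a
    post-before-pre pair depresses it, each with an exponential window.
    """
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre  # ms
            if dt > 0:    # pre fired before post -> potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:  # post fired before pre -> depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return w + dw
```

The paper's contribution, as described in the abstract, is to apply an STDP-style correlation not between two spike trains but between the apical-dendrite membrane potential waveform and the input spike train, which this standard form does not capture.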
