L4-Norm Weight Adjustments for Converted Spiking Neural Networks

17 Nov 2021 · Jason Allred, Kaushik Roy

Spiking Neural Networks (SNNs) are being explored for their potential energy-efficiency benefits due to sparse, event-driven computation. Non-spiking artificial neural networks are typically trained with stochastic gradient descent using backpropagation, but computing true gradients for backpropagation in spiking networks is impeded by the non-differentiable firing events of spiking neurons. Approximate gradients work around this problem but are computationally expensive when applied over many time steps. A common alternative, therefore, is to train a topologically equivalent non-spiking network and then convert it to a spiking network, replacing real-valued inputs with proportionally rate-encoded Poisson spike trains. Converted SNNs function sufficiently well because the mean pre-firing membrane potential of a spiking neuron is proportional to the dot product of the input rate vector and the neuron's weight vector, mirroring the computation of the non-spiking network. However, this conversion accounts only for the mean, not the temporal variance, of the membrane potential. Because the standard deviation of the pre-firing membrane potential is proportional to the L4-norm of the neuron's weight vector, we propose an L4-norm-based weight adjustment during the conversion process to improve the classification accuracy of the converted network.
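
Below is a minimal sketch of the two ideas in the abstract: that under rate-encoded inputs the mean per-step synaptic drive into a neuron matches the ANN's dot product, and that a per-neuron weight correction can be keyed to the L4-norm, ‖w‖₄ = (Σᵢ wᵢ⁴)^(1/4). The Bernoulli spike model is a simple stand-in for Poisson spike trains, and the correction rule `adjust_weights_l4` with its `alpha` parameter is hypothetical, since the abstract states only that the adjustment is based on the L4-norm, not its exact functional form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rate-encoded inputs: a real-valued input vector x is replaced by spike
# trains whose firing rates are proportional to x, as described above.
n = 64
x = rng.uniform(size=n)            # hypothetical input vector in [0, 1]
w = rng.normal(size=n) * 0.1       # hypothetical neuron weight vector
T = 10_000                         # number of simulation time steps

# Bernoulli spikes per step as a stand-in for Poisson spike trains at rates x.
spikes = (rng.random((T, n)) < x).astype(float)
drive = spikes @ w                 # per-step synaptic input to the neuron

print("mean per-step drive:", drive.mean())  # ~ dot(x, w), matching the ANN
print("dot(x, w)          :", x @ w)
print("std of drive       :", drive.std())   # the temporal variance at issue

# L4-norm-based weight adjustment (illustrative only): the additive form and
# `alpha` below are assumptions, not the authors' formula.
def l4_norm_rows(W):
    """L4-norm of each row: (sum_i W[j, i]**4) ** (1/4)."""
    return np.sum(W ** 4, axis=1) ** 0.25

def adjust_weights_l4(W, alpha=0.1):
    """Shrink each output neuron's weights in proportion to its L4-norm,
    damping the neurons whose membrane potential fluctuates most over time."""
    return W / (1.0 + alpha * l4_norm_rows(W))[:, None]
```

Applying `adjust_weights_l4` to each layer's weight matrix during conversion would selectively attenuate high-variance neurons; the paper itself should be consulted for the actual correction used.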
