A Basic Compositional Model for Spiking Neural Networks

12 Aug 2018  ·  Nancy Lynch, Cameron Musco

We present a formal, mathematical foundation for modeling and reasoning about the behavior of synchronous, stochastic Spiking Neural Networks (SNNs), which have been widely used in studies of neural computation. Our approach follows paradigms established in the field of concurrency theory. Our SNN model is based on directed graphs of neurons, classified as input, output, and internal neurons. We focus here on basic SNNs, in which a neuron's only state is a Boolean value indicating whether or not the neuron is currently firing. We also define the external behavior of an SNN, in terms of probability distributions on its external firing patterns. We define two operators on SNNs: a composition operator, which supports modeling of SNNs as combinations of smaller SNNs, and a hiding operator, which reclassifies some output behavior of an SNN as internal. We prove results showing how the external behavior of a network built using these operators is related to the external behavior of its component networks. Finally, we define the notion of a problem to be solved by an SNN, and show how the composition and hiding operators affect the problems that are solved by the networks. We illustrate our definitions with three examples: a Boolean circuit constructed from gates, an Attention network constructed from a Winner-Take-All network and a Filter network, and a toy example involving combining two networks in a cyclic fashion.
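To make the abstract's definitions concrete, here is a minimal, illustrative Python sketch of a basic SNN as a directed graph of neurons carrying only a Boolean firing state, together with composition and hiding operators. All names, the sigmoid firing rule, and the graph encoding are our own assumptions for illustration, not the paper's formal definitions.

```python
import math
import random

class BasicSNN:
    """Sketch of a basic SNN: neurons are classified as input, output,
    or internal, and each non-input neuron's only state is a Boolean
    firing flag (an assumption-level model, not the paper's exact one)."""

    def __init__(self, inputs, outputs, internals, weights, bias):
        self.inputs = list(inputs)        # input neuron names
        self.outputs = list(outputs)      # output neuron names
        self.internals = list(internals)  # internal neuron names
        self.weights = weights            # (pre, post) -> edge weight
        self.bias = bias                  # post -> firing threshold
        # Boolean firing state for every non-input neuron
        self.firing = {n: False for n in self.outputs + self.internals}

    def step(self, input_firing):
        """One synchronous round: each non-input neuron fires stochastically,
        with a sigmoid probability (an illustrative rule) of the weighted sum
        of neurons that fired in the previous round, minus its bias."""
        prev = dict(self.firing)
        prev.update(input_firing)
        new = {}
        for post in self.outputs + self.internals:
            potential = sum(w for (pre, p), w in self.weights.items()
                            if p == post and prev.get(pre, False))
            potential -= self.bias.get(post, 0.0)
            p_fire = 1.0 / (1.0 + math.exp(-potential))
            new[post] = random.random() < p_fire
        self.firing = new
        return {n: self.firing[n] for n in self.outputs}

def compose(a, b):
    """Composition operator sketch: merge two networks, identifying
    equally named outputs of `a` with inputs of `b`. Matched neurons
    remain outputs of the composite; hiding can later internalize them."""
    matched = [n for n in b.inputs if n in a.outputs]
    return BasicSNN(
        inputs=a.inputs + [n for n in b.inputs if n not in matched],
        outputs=a.outputs + b.outputs,
        internals=a.internals + b.internals,
        weights={**a.weights, **b.weights},
        bias={**a.bias, **b.bias},
    )

def hide(snn, hidden):
    """Hiding operator sketch: reclassify some output neurons as internal,
    removing their firing patterns from the network's external behavior."""
    moved = [n for n in snn.outputs if n in hidden]
    snn.outputs = [n for n in snn.outputs if n not in hidden]
    snn.internals = snn.internals + moved
```

For example, composing a network with output `y` and a network with input `y` yields a composite in which `y` is both internally wired and externally visible; applying `hide` to `y` then matches the abstract's use of hiding to turn output behavior into internal behavior.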
