The Heidelberg spiking datasets for the systematic evaluation of spiking neural networks

16 Oct 2019  ·  Benjamin Cramer, Yannik Stradmann, Johannes Schemmel, Friedemann Zenke

Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in silico. These methods hold the promise of building more efficient non-von-Neumann computing hardware and will offer new vistas in the quest to unravel brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification datasets, broadly applicable for benchmarking both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Further, we applied this conversion to an existing and a novel speech dataset. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit dataset that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these datasets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
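The abstract's core technical step is the audio-to-spiking conversion. The sketch below illustrates the general idea with a heavily simplified stand-in: a short-time filterbank followed by an intensity-to-latency code, where louder channels fire earlier within each frame. This is not the paper's neurophysiology-inspired cochlea model (basilar membrane, hair cell, and bushy cell layers); the function name, parameters, and threshold are illustrative assumptions.

```python
import numpy as np

def audio_to_spikes(waveform, sr=16000, n_channels=32, n_frames=100):
    """Toy audio-to-spike conversion: filterbank energies mapped to
    per-frame spike latencies. A simplified sketch, not the paper's
    published cochlea model."""
    # Short-time magnitude spectrum (rectangular window for brevity).
    frame_len = len(waveform) // n_frames
    frames = waveform[: frame_len * n_frames].reshape(n_frames, frame_len)
    spec = np.abs(np.fft.rfft(frames, axis=1))            # (frames, bins)

    # Pool FFT bins into coarse frequency channels.
    bins = np.array_split(np.arange(spec.shape[1]), n_channels)
    energy = np.stack([spec[:, b].mean(axis=1) for b in bins], axis=1)

    # Intensity-to-latency: stronger channels spike earlier in the frame.
    energy = energy / (energy.max() + 1e-12)
    frame_dt = frame_len / sr
    latency = (1.0 - energy) * frame_dt                   # (frames, channels)

    times, units = [], []
    thresh = 0.1                                          # weak channels stay silent
    for f in range(n_frames):
        for c in range(n_channels):
            if energy[f, c] > thresh:
                times.append(f * frame_dt + latency[f, c])
                units.append(c)
    return np.array(times), np.array(units)

# Example: a 1 s chirp yields spikes sweeping upward across channels.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
wave = np.sin(2 * np.pi * (200 + 1800 * t) * t)
times, units = audio_to_spikes(wave, sr=sr)
```

The `(times, units)` pair mirrors the common event-based representation in which each spike is a (timestamp, input-channel) tuple, which is also how downstream spiking classifiers typically consume such datasets.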



Introduced in the Paper:

SHD (Spiking Heidelberg Digits), SSC (Spiking Speech Commands)


Used in the Paper:

MNIST, Speech Commands

Results from the Paper

Task                 | Dataset | Model         | Metric Name        | Metric Value | Global Rank
---------------------|---------|---------------|--------------------|--------------|------------
Audio Classification | SHD     | CNN           | Percentage correct | 92.4         | #4
Audio Classification | SHD     | Recurrent SNN | Percentage correct | 83.2         | #9
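The recurrent SNN entry above refers to a network of leaky integrate-and-fire (LIF) neurons with recurrent connections. The following is a minimal forward-pass sketch of such a layer in discrete time, assuming exponential membrane decay and reset-to-zero after a spike; the layer sizes, weights, and time constant are illustrative, not the paper's trained model or its surrogate-gradient training setup.

```python
import numpy as np

def lif_recurrent_forward(spikes_in, w_in, w_rec, tau=20.0, dt=1.0, v_th=1.0):
    """One forward pass of a recurrent LIF layer (discrete time).
    Illustrative sketch of the model class benchmarked on SHD."""
    T, _ = spikes_in.shape
    n_hid = w_in.shape[1]
    alpha = np.exp(-dt / tau)            # membrane decay per time step
    v = np.zeros(n_hid)
    out_spikes = np.zeros((T, n_hid))
    for step in range(T):
        # Recurrent input comes from the previous step's output spikes.
        rec = out_spikes[step - 1] @ w_rec if step > 0 else 0.0
        v = alpha * v + spikes_in[step] @ w_in + rec
        spiked = v >= v_th
        out_spikes[step] = spiked.astype(float)
        v = np.where(spiked, 0.0, v)     # reset membrane after a spike
    return out_spikes

# Random spike rasters with 700 input channels, as in SHD.
rng = np.random.default_rng(0)
x = (rng.random((100, 700)) < 0.02).astype(float)
w_in = rng.normal(0.0, 0.05, (700, 128))
w_rec = rng.normal(0.0, 0.05, (128, 128))
s = lif_recurrent_forward(x, w_in, w_rec)
```

Because the membrane potential integrates inputs over time and the recurrence carries state across steps, such a network can exploit the spike timing information that the paper shows is essential for good accuracy on these datasets.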

