We investigate the design choices used in previous studies in terms of accuracy and the number of spikes, and find that they are not well suited to SNNs.
We implemented large-batch synchronous training of DNNs based on Caffe, a deep learning library.
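A minimal sketch of the synchronous scheme, assuming a toy linear model, eight workers, and plain SGD (all illustrative choices; the study itself built on Caffe with real DNNs): each worker computes a gradient on its shard of one large batch, and the averaged gradient drives a single update.

```python
import numpy as np

# Toy synchronous data-parallel SGD: each "worker" computes a gradient on
# its shard of a large batch; gradients are averaged before one update.
# (Illustrative sketch only; the study itself used Caffe with real DNNs.)

rng = np.random.default_rng(0)
w = np.zeros(4)                       # linear-model weights (hypothetical)
X = rng.normal(size=(1024, 4))        # one large batch of inputs
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=1024)

num_workers, lr = 8, 0.1
for step in range(100):
    shards = np.array_split(np.arange(1024), num_workers)
    grads = []
    for idx in shards:                # each worker: MSE gradient on its shard
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    w -= lr * np.mean(grads, axis=0)  # synchronous step: average, then update

print("learned weights:", np.round(w, 2))
```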
Sentence-level relation extraction mainly aims to classify the relation between two entities in a sentence.
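As a rough illustration of that formulation (the label set, span pooling, and linear scorer below are hypothetical simplifications, not any particular system):

```python
import numpy as np

# Toy sentence-level relation classifier: mean-pool the token encodings of
# the two entity spans, concatenate them, and score a fixed relation label
# set with a linear layer. A learned sentence encoder is assumed; random
# vectors stand in for it, so the prediction is meaningless but the shapes
# show the formulation.

RELATIONS = ["founded_by", "located_in", "no_relation"]  # hypothetical labels
rng = np.random.default_rng(1)

def classify(token_enc, subj_span, obj_span, W):
    subj = token_enc[subj_span[0]:subj_span[1]].mean(axis=0)
    obj = token_enc[obj_span[0]:obj_span[1]].mean(axis=0)
    logits = W @ np.concatenate([subj, obj])
    return RELATIONS[int(np.argmax(logits))]

enc = rng.normal(size=(10, 16))            # stand-in encoding of 10 tokens
W = rng.normal(size=(len(RELATIONS), 32))  # untrained classifier weights
print(classify(enc, subj_span=(0, 2), obj_span=(5, 7), W=W))
```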
On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes than a student SNN trained with the conventional knowledge distillation method.
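For context, a minimal sketch of the conventional knowledge distillation loss that serves as the baseline here, with the student SNN's output spike counts standing in for logits; the temperature and mixing weight are the usual KD hyperparameters, chosen arbitrarily for illustration, and the paper's modified recipe differs from this baseline.

```python
import numpy as np

# Conventional KD loss applied to an SNN student: the student's output spike
# counts over T timesteps act as logits, softened against a teacher's logits.
# Hyperparameters below are illustrative, not the paper's settings.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_spike_counts, teacher_logits, label, T=4.0, alpha=0.9):
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_spike_counts / T)
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12)) * T * T
    hard = -np.log(softmax(student_spike_counts)[label] + 1e-12)
    return alpha * soft + (1 - alpha) * hard

counts = np.array([3.0, 12.0, 5.0])  # spikes per output neuron (made up)
logits = np.array([0.2, 4.0, 1.0])   # teacher DNN logits (made up)
print(kd_loss(counts, logits, label=1))
```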
The proposed model finds n-to-1 subject-object relations using a forward object decoder.
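The forward object decoder itself is specific to the proposed model; the sketch below only conveys the general idea under strong assumptions: each subject independently scores every token as a candidate object, so several subjects may select the same object, yielding n-to-1 subject-object relations. All names and thresholds are hypothetical.

```python
import numpy as np

# Toy forward object decoding: for each detected subject, score every token
# as a candidate object and keep those above a threshold. Because subjects
# select objects independently, several subjects may pick the same object
# (n-to-1). Illustrative only; not the paper's architecture.

rng = np.random.default_rng(2)

def decode_objects(token_enc, subject_vecs, W, threshold=0.5):
    pairs = []
    for s_id, s in enumerate(subject_vecs):
        scores = 1 / (1 + np.exp(-(token_enc @ W @ s)))  # sigmoid per token
        for t_id in np.flatnonzero(scores > threshold):
            pairs.append((s_id, int(t_id)))
    return pairs

enc = rng.normal(size=(8, 16))        # stand-in sentence encoding
subjects = rng.normal(size=(3, 16))   # three subject representations
W = rng.normal(size=(16, 16)) * 0.1
print(decode_objects(enc, subjects, W))
```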
Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems.
Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications.
Spiking neural networks (SNNs) are considered among the most promising artificial neural networks due to their energy-efficient computing capability.
In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge.
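A minimal sketch of why content-based addressing is fragile under quantization, assuming a softmax-over-cosine-similarity read (as in typical MANNs) and a naive uniform quantizer as the baseline, not the proposed robust method:

```python
import numpy as np

# Content-based addressing in a MANN: read weights come from a softmax over
# cosine similarity between a key and each memory row. Coarsely quantizing
# the memory (naive baseline) perturbs similarities and can redirect the read.

rng = np.random.default_rng(3)

def cosine(a, B):
    return B @ a / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

def read(memory, key, beta=8.0):
    sims = beta * cosine(key, memory)
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return w @ memory, w

def quantize(M, bits=2):              # coarse uniform quantizer (baseline)
    scale = np.abs(M).max() / (2 ** (bits - 1) - 1 + 1e-12)
    return np.round(M / scale) * scale

memory = rng.normal(size=(16, 8))
key = memory[5] + 0.05 * rng.normal(size=8)   # key nearly matching slot 5
_, w_full = read(memory, key)
_, w_quant = read(quantize(memory), key)
print("full-precision read slot:", w_full.argmax(),
      "| 2-bit read slot:", w_quant.argmax())
```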
To eliminate this workaround, a new class of SNNs called deep spiking networks (DSNs) has recently been proposed; DSNs can be trained directly (without a mapping from conventional deep networks) by error backpropagation with stochastic gradient descent.
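One common way to make error backpropagation work through spikes is a surrogate gradient: run a hard-threshold spiking neuron forward, but substitute a smooth derivative for the spike function in the backward pass. The sketch below assumes a leaky integrate-and-fire neuron and a sigmoid-derivative surrogate; it is a generic illustration, not necessarily the exact DSN mechanism.

```python
import numpy as np

# Generic surrogate-gradient idea (an assumption here, not necessarily the
# DSN paper's exact mechanism): forward pass uses a hard threshold, backward
# pass replaces the step function's derivative with a smooth surrogate.

def lif_forward(inputs, v_th=1.0, decay=0.9):
    v, spikes, pre_reset = 0.0, [], []
    for x in inputs:
        v = decay * v + x                  # leaky integration
        pre_reset.append(v)                # potential seen by the threshold
        s = 1.0 if v >= v_th else 0.0      # hard spike (non-differentiable)
        v = v * (1.0 - s)                  # reset to zero on spike
        spikes.append(s)
    return np.array(spikes), np.array(pre_reset)

def surrogate_grad(v, v_th=1.0, k=4.0):
    # derivative of a sigmoid centered at the threshold, used in place of
    # the true (zero-almost-everywhere) derivative of the step function
    s = 1 / (1 + np.exp(-k * (v - v_th)))
    return k * s * (1 - s)

spikes, pots = lif_forward(np.array([0.4, 0.5, 0.6, 0.2, 0.9]))
print("spikes:", spikes)
print("surrogate dS/dV:", np.round(surrogate_grad(pots), 3))
```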
The second is the popularity of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing.