Search Results for author: Jongwan Kim

Found 2 papers, 0 papers with code

Energy-efficient Knowledge Distillation for Spiking Neural Networks

no code implementations • 14 Jun 2021 • Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon

On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes compared to the student SNN trained with the conventional knowledge distillation method.
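For context on the baseline mentioned above, the following is a minimal sketch of a conventional knowledge-distillation loss (soft-target KL term at a temperature plus a hard-label cross-entropy term). It is an illustrative, generic formulation, not the paper's method; the temperature and weighting defaults are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Conventional knowledge-distillation loss: weighted sum of a soft-target
    KL term (teacher vs. student at temperature T) and a hard-label CE term.
    T and alpha are illustrative defaults, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```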

Knowledge Distillation, Model Compression, +1
