1 code implementation • 6 May 2022 • Yuhang Li, Shikuang Deng, Xin Dong, Shi Gu
We demonstrate that our method can handle the SNN conversion with batch normalization layers and effectively preserve high accuracy even within 32 time steps.
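Handling batch normalization during ANN-to-SNN conversion typically relies on folding the BN statistics into the preceding layer's weights, so the converted network contains only affine layers. The helper below is a minimal sketch of that standard folding identity, not the authors' actual code; all names are illustrative.

```python
import numpy as np

def fold_bn_into_linear(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding linear layer (sketch).

    BN(Wx + b) = gamma * (Wx + b - mean) / sqrt(var + eps) + beta
    is rewritten as a single affine map W'x + b', so the converted SNN
    never needs a separate normalization step.
    """
    scale = gamma / np.sqrt(var + eps)      # per-output-channel scale
    w_folded = w * scale[:, None]           # scale each output row of W
    b_folded = (b - mean) * scale + beta    # shift the bias accordingly
    return w_folded, b_folded
```

After folding, the layer can be converted exactly like a plain linear layer with no BN present.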
1 code implementation • ICLR 2022 • Shikuang Deng, Yuhang Li, Shanghang Zhang, Shi Gu
Then we introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in gradient descent with surrogate gradients (SG), so that the training process can converge into flatter minima with better generalizability.
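The TET idea, as described in the abstract, is to supervise the network's output at every time step rather than only the time-averaged output. A minimal sketch of such a per-step loss, assuming a classification setting (the function names and the plain averaging are illustrative, not the paper's exact formulation):

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single sample.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def tet_loss(outputs_per_step, label):
    """Temporal per-step loss (sketch): apply the classification loss at
    each time step and average, instead of applying it once to the
    time-averaged output of the spiking network."""
    return float(np.mean([cross_entropy(o, label) for o in outputs_per_step]))
```

When all time steps produce the same output, this reduces to the ordinary single-step loss; the difference appears precisely when outputs fluctuate across time.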
no code implementations • 7 Jan 2022 • Shikuang Deng, Jingwei Li, B. T. Thomas Yeo, Shi Gu
The brain's functional connectivity fluctuates over time rather than remaining stationary, even during the resting state.
no code implementations • NeurIPS 2021 • Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, Shi Gu
Based on the introduced finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation.
Ranked #4 on Event data classification on CIFAR10-DVS
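The spike activation is a step function with zero gradient almost everywhere, so training with backpropagation needs a smooth surrogate for its derivative. Dspike learns the surrogate's shape and smoothness during training; the fixed sigmoid-derivative family below, with a temperature knob, is only a hedged stand-in illustrating the mechanism:

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    # Heaviside step: emit a spike when membrane potential crosses threshold.
    return (np.asarray(v) >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, temp=2.0):
    """Surrogate gradient for the spike function (illustrative stand-in).
    A tempered sigmoid's derivative is used in place of the true (almost
    everywhere zero) derivative; `temp` controls how sharply it peaks
    around the threshold, the quantity Dspike adapts during training."""
    s = 1.0 / (1.0 + np.exp(-temp * (v - threshold)))
    return temp * s * (1.0 - s)   # derivative of the tempered sigmoid
```

The surrogate peaks at the threshold and decays away from it, letting gradient signal flow through neurons whose potential is near the firing boundary.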
1 code implementation • 13 Jun 2021 • Yuhang Li, Shikuang Deng, Xin Dong, Ruihao Gong, Shi Gu
Moreover, our calibration algorithm can produce SNNs with state-of-the-art architectures on the large-scale ImageNet dataset, including MobileNet and RegNet.
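Calibration-style conversion methods generally adjust converted-layer parameters so that the SNN's average activations match the source ANN's on a small calibration set. The sketch below shows one simple form of this idea, a mean bias correction; it is an assumption-laden illustration, not the paper's specific algorithm:

```python
import numpy as np

def calibrate_bias(bias, ann_act, snn_act):
    """Layer-wise bias calibration (sketch): shift the converted layer's
    bias by the mean activation mismatch between the source ANN and the
    SNN on calibration data, so expected outputs line up per channel.
    Inputs `ann_act`/`snn_act` are (samples, channels) activation maps."""
    return bias + (ann_act.mean(axis=0) - snn_act.mean(axis=0))
```

In practice such corrections are applied layer by layer, with later layers calibrated after earlier ones have been fixed.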
1 code implementation • ICLR 2021 • Shikuang Deng, Shi Gu
As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs.
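A common way to choose the adjusted spiking threshold in such conversion pipelines is to set it from the ANN's pre-activation statistics on calibration data, e.g. the maximum or a high percentile of observed values. The snippet below sketches that heuristic; the percentile choice is illustrative, not prescribed by the paper.

```python
import numpy as np

def balance_threshold(preacts, percentile=99.9):
    """Threshold balancing (sketch): set a layer's spiking threshold to a
    high percentile of the ANN's pre-activations observed on calibration
    data, so nearly all activation mass is representable by spike rates
    while outliers do not inflate the threshold."""
    return float(np.percentile(np.asarray(preacts), percentile))
```

Using a percentile rather than the strict maximum trades a small amount of clipping for a much better-scaled firing range.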