1 code implementation • 6 May 2022 • Yuhang Li, Shikuang Deng, Xin Dong, Shi Gu
We demonstrate that our method can handle SNN conversion in networks with batch normalization layers and preserve high accuracy even with as few as 32 time steps.
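One standard way to handle batch normalization during ANN-to-SNN conversion is to fold each BN layer into the preceding convolution before converting, so the resulting network contains only affine conv layers. The sketch below illustrates that folding step; it is a generic technique rather than the authors' exact procedure, and the function name is illustrative.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d so the fused layer
    computes conv-then-BN in a single affine step (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      groups=conv.groups, bias=True)
    # BN at inference: y = gamma * (x - running_mean) / sqrt(running_var + eps) + beta
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight.data / std                     # per-output-channel multiplier
    fused.weight.data = conv.weight.data * scale.view(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

Since BN is affine once its running statistics are frozen, the fused layer reproduces the original conv-plus-BN output exactly at inference time.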
1 code implementation • ICLR 2022 • Shikuang Deng, Yuhang Li, Shanghang Zhang, Shi Gu
Then we introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in gradient descent with surrogate gradients (SG), so that the training process converges to flatter minima with better generalizability.
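The core idea, as the abstract describes it, is to supervise every time step directly rather than only the time-averaged output. A minimal sketch of such a temporally distributed loss, assuming logits of shape [T, batch, classes]; the tensor layout is an assumption, and the TET paper additionally uses a regularization term not shown here.

```python
import torch
import torch.nn.functional as F

def tet_style_loss(outputs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """outputs: [T, batch, num_classes] logits, one slice per time step.
    Averages the per-step cross-entropy losses instead of the logits,
    so every time step receives a direct supervision signal."""
    T = outputs.shape[0]
    return sum(F.cross_entropy(outputs[t], target) for t in range(T)) / T
```

Compare with the common baseline `F.cross_entropy(outputs.mean(0), target)`, which averages the outputs over time first and gives each individual step a weaker gradient signal.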
no code implementations • 7 Jan 2022 • Shikuang Deng, Jingwei Li, B. T. Thomas Yeo, Shi Gu
The brain's functional connectivity fluctuates over time rather than remaining stationary, even during the resting state.
1 code implementation • 27 Dec 2021 • Shwai He, Shi Gu
Traditionally, the prediction of future stock movements is based on historical trading records.
no code implementations • NeurIPS 2021 • Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, Shi Gu
Based on the introduced finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation.
Ranked #4 on Event data classification on CIFAR10-DVS
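For intuition: the spike function is non-differentiable, so SNN training relies on a surrogate derivative in the backward pass. The sketch below shows a generic surrogate-gradient spike with a tunable temperature. It is only a sigmoid-based stand-in; the Dspike family proposed here adapts its shape and smoothness during training via the finite-difference gradient, which this simplified version does not implement.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate derivative
    in the backward pass. `temp` controls how sharp the surrogate is."""
    @staticmethod
    def forward(ctx, membrane_potential, temp: float = 3.0):
        ctx.save_for_backward(membrane_potential)
        ctx.temp = temp
        return (membrane_potential >= 0).float()  # fire on threshold crossing

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # derivative of a temperature-scaled sigmoid as the surrogate gradient
        sig = torch.sigmoid(ctx.temp * v)
        return grad_output * ctx.temp * sig * (1 - sig), None
```

Typical usage inside a spiking neuron layer would be `spikes = SurrogateSpike.apply(v - threshold, 3.0)`.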
no code implementations • 18 Oct 2021 • Hengji Cui, Dong Wei, Kai Ma, Shi Gu, Yefeng Zheng
In this work, we propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML).
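A common DML-based recipe for low-shot segmentation is to pool support features under the ground-truth masks into per-class prototypes, then label each query pixel by its distance (or similarity) to those prototypes. A minimal sketch under that assumption; the tensor layout and the use of cosine similarity are illustrative choices, not necessarily those of the proposed framework.

```python
import torch
import torch.nn.functional as F

def prototype_segmentation_logits(support_feats, support_masks, query_feats):
    """Illustrative DML-style few-shot segmentation step.
    support_feats: [S, C, H, W] backbone features of support images
    support_masks: [S, K, H, W] binary masks, one channel per class
    query_feats:   [B, C, H, W] backbone features of query images
    Returns per-pixel class scores of shape [B, K, H, W]."""
    feats = support_feats.unsqueeze(1)   # [S, 1, C, H, W]
    masks = support_masks.unsqueeze(2)   # [S, K, 1, H, W]
    # masked average pooling -> one prototype per class, shape [K, C]
    protos = (feats * masks).sum(dim=(0, 3, 4)) / (masks.sum(dim=(0, 3, 4)) + 1e-6)
    protos = F.normalize(protos, dim=1)
    q = F.normalize(query_feats, dim=1)
    # cosine similarity of every query pixel to every prototype
    return torch.einsum('bchw,kc->bkhw', q, protos)
```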
1 code implementation • 13 Jun 2021 • Yuhang Li, Shikuang Deng, Xin Dong, Ruihao Gong, Shi Gu
Moreover, our calibration algorithm can produce SNNs with state-of-the-art architectures, including MobileNet and RegNet, on the large-scale ImageNet dataset.
1 code implementation • ICLR 2021 • Shikuang Deng, Shi Gu
As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs.
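A widely used instance of this conversion recipe is threshold balancing: copy the trained weights, then set each layer's firing threshold from the activation statistics the ANN produces on calibration data. A minimal sketch, assuming ReLU activations and a percentile-based threshold; both are common but illustrative choices, not a specific paper's procedure.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def estimate_thresholds(ann: nn.Module, loader, percentile: float = 99.9):
    """Run calibration batches through the ANN, record each ReLU layer's
    activations, and take a high percentile as that layer's spiking
    threshold for the converted SNN. (In practice one would subsample
    activations rather than store them all.)"""
    acts, hooks = {}, []
    for name, m in ann.named_modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out, key=name:
                    acts.setdefault(key, []).append(out.flatten())))
    for x, _ in loader:
        ann(x)
    for h in hooks:
        h.remove()
    return {k: torch.quantile(torch.cat(v), percentile / 100.0).item()
            for k, v in acts.items()}
```

Using a high percentile instead of the raw maximum makes the thresholds robust to activation outliers, a standard trade-off between clipping error and firing-rate saturation.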
2 code implementations • ICLR 2021 • Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, Shi Gu
To further exploit the power of quantization, we incorporate a mixed-precision technique into our framework by approximating the inter-layer and intra-layer sensitivity.
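One simple proxy for inter-layer sensitivity is to quantize one layer at a time and measure how much the model output moves. The sketch below assumes a hypothetical `quantize_layer` helper (not part of any real library) and uses output MSE as the score; the paper's own sensitivity approximation may differ, e.g. by using second-order information.

```python
import copy
import torch

@torch.no_grad()
def layer_sensitivity(model, quantize_layer, layer_names, batch):
    """Quantize each named layer in isolation and score it by the output
    perturbation it causes. `quantize_layer(model, name, bits)` is an
    assumed helper returning the model with only that layer quantized."""
    x, _ = batch
    ref = model(x)
    scores = {}
    for name in layer_names:
        quantized = quantize_layer(copy.deepcopy(model), name, bits=4)
        scores[name] = torch.mean((quantized(x) - ref) ** 2).item()
    return scores  # higher score -> more sensitive -> allocate more bits
```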
no code implementations • ICCV 2021 • Yuhang Li, Feng Zhu, Ruihao Gong, Mingzhu Shen, Xin Dong, Fengwei Yu, Shaoqing Lu, Shi Gu
However, the inversion process only utilizes biased feature statistics stored in one model and maps from a low-dimensional space to a high-dimensional one.
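For context on the "feature statistics stored in one model": inversion-style data synthesis typically optimizes synthetic images so that the batch statistics they induce at each BatchNorm layer match that layer's stored running statistics. A minimal sketch of that statistics-matching loss, as a generic formulation rather than this paper's method.

```python
import torch
import torch.nn as nn

def bn_statistics_loss(model: nn.Module, synthetic_images: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between the batch statistics produced by synthetic
    images at each BatchNorm2d layer and the running statistics stored
    there. Gradients flow back to the synthetic images via the hooks."""
    losses, hooks = [], []

    def hook(module, inputs, output):
        x = inputs[0]
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        losses.append(torch.norm(mean - module.running_mean) +
                      torch.norm(var - module.running_var))

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(hook))
    model(synthetic_images)   # populates `losses` via the hooks
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum()
```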