Search Results for author: Qingyan Meng

Found 6 papers, 6 papers with code

Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks

1 code implementation • 19 Feb 2024 • Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, Zhouchen Lin

Neuromorphic computing with spiking neural networks is promising for energy-efficient artificial intelligence (AI) applications.

Continual Learning

Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

1 code implementation • ICCV 2023 • Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In particular, our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.

SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks

1 code implementation • 1 Feb 2023 • Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed training method, implicit differentiation on the equilibrium state (IDE), to supervised learning with purely spike-based computation, demonstrating the potential for energy-efficient training of SNNs.

Online Training Through Time for Spiking Neural Networks

1 code implementation • 9 Oct 2022 • Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, Zhouchen Lin

OTTT connects, for the first time, the two mainstream supervised SNN training methods, BPTT with surrogate gradients (SG) and spike representation-based training, and does so in a biologically plausible form.

Event data classification • Gesture Recognition +1

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

1 code implementation • CVPR 2022 • Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In this paper, we propose the Differentiation on Spike Representation (DSR) method, which achieves performance competitive with ANNs while maintaining low latency.

Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

1 code implementation • NeurIPS 2021 • Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation.