Search Results for author: Qu Yang

Found 7 papers, 3 papers with code

LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding

no code implementations • 23 Oct 2023 • Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li

With TTFS coding, we can achieve orders-of-magnitude savings in computation over ANNs and other rate-based SNNs.

Edge-computing Image Classification +2
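
As a rough illustration of the entry above, the sketch below shows plain time-to-first-spike (TTFS) coding, in which each neuron fires at most once and stronger activations fire earlier; a single spike per neuron is what drives the computational savings over rate coding. The helper name, the linear time mapping, and the time window are assumptions for illustration and do not reproduce the LC-TTFS conversion algorithm.

```python
import numpy as np

def ttfs_encode(activations, t_max=100.0):
    """Hypothetical time-to-first-spike (TTFS) encoder: each neuron emits at
    most one spike, and stronger activations fire earlier. This only
    illustrates the coding scheme, not the LC-TTFS conversion algorithm."""
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    spike_times = np.full(a.shape, np.inf)         # inf means "never fires"
    fired = a > 0
    spike_times[fired] = (1.0 - a[fired]) * t_max  # larger input -> earlier spike
    return spike_times

# Strong input fires early (~t=10), weak input fires late (~t=80),
# zero input never fires.
print(ttfs_encode([0.9, 0.2, 0.0]))
```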

TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling

1 code implementation • 25 Aug 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan

The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
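
For readers unfamiliar with two-compartment neurons, the following is a minimal sketch of a generic dendrite-plus-soma leaky integrate-and-fire update. The decay factors, coupling term, and reset rule are assumed placeholders rather than the TC-LIF dynamics defined in the paper.

```python
import numpy as np

def two_compartment_lif_step(u_d, u_s, x, alpha_d=0.9, alpha_s=0.9,
                             g_c=0.5, v_th=1.0):
    """One update step of a generic two-compartment LIF neuron (dendrite + soma).
    The constants and reset rule here are placeholders; the actual TC-LIF
    dynamics are specified in the paper."""
    u_d = alpha_d * u_d + x          # dendritic compartment integrates the input
    u_s = alpha_s * u_s + g_c * u_d  # somatic compartment is driven by the dendrite
    spike = np.asarray(u_s >= v_th, dtype=float)
    u_s = u_s - spike * v_th         # soft reset: subtract threshold on spike
    return u_d, u_s, spike

# Feed a constant input for a few steps and watch for a spike.
u_d, u_s = np.zeros(1), np.zeros(1)
for t in range(5):
    u_d, u_s, s = two_compartment_lif_step(u_d, u_s, x=np.array([0.5]))
    print(t, float(u_s), float(s))
```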

Long Short-term Memory with Two-Compartment Spiking Neuron

no code implementations • 14 Jul 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan

The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.

A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks

1 code implementation • 26 May 2023 • Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan

As an initial exploration in this direction, we propose a hybrid neural coding and learning framework, which encompasses a neural coding zoo with diverse neural coding schemes discovered in neuroscience.

Image Classification
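
To make the "coding zoo" idea concrete, the sketch below pairs a simple Bernoulli rate encoder with a per-layer assignment of coding schemes. The encoder, layer names, and assignment are assumptions for illustration, not the paper's framework.

```python
import numpy as np

def rate_encode(activations, n_steps=20, seed=0):
    """Hypothetical Bernoulli rate encoder: spike probability per time step is
    proportional to the activation (illustration only, not the paper's scheme)."""
    rng = np.random.default_rng(seed)
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps,) + a.shape) < a).astype(float)  # [T, ...] spikes

# The "hybrid" idea: different layers may use different coding schemes drawn
# from a coding zoo. The mapping below is purely an assumption for illustration.
layer_coding = {"conv1": "rate", "conv2": "rate", "fc_out": "ttfs"}
print(rate_encode([0.9, 0.2, 0.0]).mean(axis=0))  # empirical rates track the inputs
```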

Training Spiking Neural Networks with Local Tandem Learning

1 code implementation • 10 Oct 2022 • Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li

The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN.
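
The following sketch illustrates the teacher-student idea behind layer-local tandem learning: an SNN layer's time-averaged output is driven toward the corresponding activation of a frozen, pre-trained ANN layer. The loss form, tensor shapes, and function name are assumptions; the actual LTL rule, credit assignment, and surrogate gradients are specified in the paper.

```python
import torch
import torch.nn.functional as F

def local_tandem_loss(snn_layer_out, ann_layer_out):
    """Layer-local teacher-student objective (sketch only): push the SNN
    layer's time-averaged output toward the pre-trained ANN layer's activation.
    snn_layer_out: [T, B, C] spike tensor; ann_layer_out: [B, C] activations."""
    snn_rate = snn_layer_out.mean(dim=0)                 # average spikes over time
    return F.mse_loss(snn_rate, ann_layer_out.detach())  # teacher is frozen

# Toy usage with random tensors standing in for real layer outputs.
loss = local_tandem_loss(torch.rand(10, 4, 8), torch.rand(4, 8))
print(float(loss))
```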

Deep Spiking Neural Network with Spike Count based Learning Rule

no code implementations • 15 Feb 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Qu Yang, Guoqi Li, Haizhou Li

Deep spiking neural networks (SNNs) support asynchronous event-driven computation and massive parallelism, and show great potential to improve the energy efficiency of their synchronous analog counterparts.
