no code implementations • 10 Oct 2024 • Zecheng Hao, Yifan Huang, Zijie Xu, Zhaofei Yu, Tiejun Huang
Spiking Neural Networks (SNNs) are considered to have enormous potential in the future development of Artificial Intelligence (AI) due to their brain-inspired and energy-efficient properties.
no code implementations • 6 Oct 2024 • Qichao Ma, Rui-Jie Zhu, Peiye Liu, Renye Yan, Fahong Zhang, Ling Liang, Meng Li, Zhaofei Yu, Zongwei Wang, Yimao Cai, Tiejun Huang
However, a gap remains between them: direct assessments of how dataset contributions impact LLM outputs are missing.
no code implementations • 19 Sep 2024 • Xian Zhong, Shengwang Hu, Wenxuan Liu, Wenxin Huang, Jianhao Ding, Zhaofei Yu, Tiejun Huang
In this paper, we propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets, to mitigate the notable decline in performance at lower time steps.
no code implementations • 5 Sep 2024 • Tong Bu, Maohua Li, Zhaofei Yu
This showcases its applicability to both classification and regression tasks.
no code implementations • 14 Jul 2024 • Jiyuan Zhang, Kang Chen, Shiyan Chen, Yajing Zheng, Tiejun Huang, Zhaofei Yu
To address this issue, we make the first attempt to introduce 3D Gaussian Splatting (3DGS) to spike cameras for high-speed capture, using 3DGS to provide dense and continuous view cues, and then construct SpikeGS.
1 code implementation • 1 Jun 2024 • Lihao Wang, Zhaofei Yu
Spiking Neural Networks (SNNs) emulate the leaky integrate-and-fire mechanism found in biological neurons, offering a compelling combination of biological realism and energy efficiency.
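As a brief illustration of the leaky integrate-and-fire mechanism mentioned above, the following minimal sketch simulates a single discrete-time LIF neuron; the parameter names (`tau`, `v_threshold`, `v_reset`) and the hard-reset scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lif_simulate(input_current, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a single discrete-time leaky integrate-and-fire (LIF) neuron.

    At each step the membrane potential leaks toward rest, integrates the
    input, and emits a spike (then resets) once it crosses the threshold.
    """
    v = v_reset
    spikes = []
    for x in input_current:
        # leaky integration: decay toward the reset value, then add input
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset          # hard reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a suprathreshold constant input drives periodic spiking
print(lif_simulate(np.full(20, 1.5)))
```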
no code implementations • 1 Jun 2024 • Baoyue Zhang, Yajing Zheng, Shiyan Chen, Jiyuan Zhang, Kang Chen, Zhaofei Yu, Tiejun Huang
This innovative approach comprehensively records temporal and spatial visual information, making it particularly suitable for magnifying high-speed micro-motions. This paper introduces SpikeMM, a pioneering spike-based algorithm tailored to high-speed motion magnification.
1 code implementation • 31 May 2024 • Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang
We show that membrane potential perturbation dynamics can reliably convey the intensity of the perturbation.
no code implementations • 30 May 2024 • Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu
In this paper, we propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization.
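One generic way to realize gradient sparsity regularization is to add an L1 penalty on the input gradients to the training loss. The sketch below, with the hypothetical helper `loss_with_gradient_sparsity` and weight `lam`, illustrates this idea in PyTorch under the assumption of batched inputs; it is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def loss_with_gradient_sparsity(model, x, y, lam=1e-3):
    """Cross-entropy loss plus an L1 penalty on input gradients.

    Penalizing the L1 norm of d(loss)/d(input) encourages sparse input
    gradients, one generic form of gradient sparsity regularization.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # input gradients, kept in the graph so the penalty is differentiable
    grads, = torch.autograd.grad(ce, x, create_graph=True)
    # per-sample L1 norm of the input gradients, averaged over the batch
    sparsity_penalty = grads.flatten(start_dim=1).abs().sum(dim=1).mean()
    return ce + lam * sparsity_penalty
```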
no code implementations • 21 May 2024 • Yuanhong Tang, Shanshan Jia, Tiejun Huang, Zhaofei Yu, Jian K. Liu
A single neuron receives an extensive array of synaptic inputs through its dendrites, raising the fundamental question of how these inputs undergo integration and summation, culminating in the initiation of spikes in the soma.
no code implementations • 26 Apr 2024 • Zhipeng Huang, Jianhao Ding, Zhiyu Pan, Haoran Li, Ying Fang, Zhaofei Yu, Jian K. Liu
One of the mainstream approaches to implementing deep SNNs is the ANN-SNN conversion, which integrates the efficient training strategy of ANNs with the energy-saving potential and fast inference capability of SNNs.
2 code implementations • CVPR 2024 • Xinyu Shi, Zecheng Hao, Zhaofei Yu
Based on DSSA, we propose a novel spiking Vision Transformer architecture called SpikingResformer, which combines the ResNet-based multi-stage architecture with our proposed DSSA to improve both performance and energy efficiency while reducing parameters.
1 code implementation • 14 Mar 2024 • Kang Chen, Shiyan Chen, Jiyuan Zhang, Baoyue Zhang, Yajing Zheng, Tiejun Huang, Zhaofei Yu
Our approach begins with the formulation of a spike-guided deblurring model that explores the theoretical relationships among spike streams, blurry images, and their corresponding sharp sequences.
no code implementations • 1 Feb 2024 • Zecheng Hao, Xinyu Shi, Yujia Liu, Zhaofei Yu, Tiejun Huang
Extensive experimental results demonstrate that our model outperforms previous state-of-the-art works on various types of datasets, promoting SNNs to a new level of performance comparable to that of quantized ANNs.
no code implementations • 8 Jan 2024 • Peter Beech, Shanshan Jia, Zhaofei Yu, Jian K. Liu
The visual pathway involves complex networks of cells and regions which contribute to the encoding and processing of visual information.
no code implementations • CVPR 2024 • Yanchen Dong, Ruiqin Xiong, Jian Zhang, Zhaofei Yu, Xiaopeng Fan, Shuyuan Zhu, Tiejun Huang
Experimental results demonstrate that the proposed scheme can reconstruct satisfactory color images with both high temporal and spatial resolution from low-resolution Bayer-pattern spike streams.
1 code implementation • CVPR 2024 • Jiyuan Zhang, Shiyan Chen, Yajing Zheng, Zhaofei Yu, Tiejun Huang
It can supplement the temporal information lost in traditional cameras and guide motion deblurring.
1 code implementation • CVPR 2024 • Rui Zhao, Ruiqin Xiong, Jing Zhao, Jian Zhang, Xiaopeng Fan, Zhaofei Yu, Tiejun Huang
Different from traditional cameras, each pixel in a spike camera records the arrival of photons continuously by firing binary spikes at an ultra-fine temporal granularity.
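Spike-camera pixels are commonly described as integrate-and-fire samplers: photons are accumulated until a threshold is crossed, at which point a binary spike is emitted and the accumulator is decremented. The toy model below illustrates this idea; the threshold value and intensity units are assumptions for illustration only.

```python
import numpy as np

def spike_camera_pixel(light_intensity, threshold=255.0):
    """Integrate-and-fire sampling model of a single spike-camera pixel.

    The pixel accumulates incoming light (a per-step intensity value) and
    fires a binary spike whenever the accumulator crosses the threshold,
    keeping the residual charge afterwards.
    """
    accumulator = 0.0
    spikes = np.zeros(len(light_intensity), dtype=np.uint8)
    for t, intensity in enumerate(light_intensity):
        accumulator += intensity
        if accumulator >= threshold:
            spikes[t] = 1
            accumulator -= threshold   # keep the residual charge
    return spikes

# Brighter light -> denser spikes (shorter inter-spike intervals)
print(spike_camera_pixel(np.full(40, 100.0)).sum(),
      spike_camera_pixel(np.full(40, 20.0)).sum())
```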
1 code implementation • CVPR 2024 • Changqing Su, Zhiyuan Ye, Yongsheng Xiao, You Zhou, Zhen Cheng, Bo Xiong, Zhaofei Yu, Tiejun Huang
Nevertheless, due to disparities in data modality and information characteristics compared with frame streams and event streams, the current lack of efficient AC methods has made it challenging for spike cameras to adapt to intricate real-world conditions.
no code implementations • 24 Dec 2023 • Zexiang Yi, Jing Lian, Yunliang Qi, Zhaofei Yu, Huajin Tang, Yide Ma, Jizhao Liu
In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN), to improve the expressiveness and recognition performance of SNNs for vision tasks.
no code implementations • 3 Nov 2023 • Bo Xiong, Changqing Su, Zihan Lin, You Zhou, Zhaofei Yu
Here, we propose a neural rendering method for CT reconstruction, named Iterative Neural Adaptive Tomography (INeAT), which incorporates iterative posture optimization to effectively counteract the influence of posture perturbations in data, particularly in cases involving significant posture variations.
1 code implementation • 25 Oct 2023 • Wei Fang, Yanqi Chen, Jianhao Ding, Zhaofei Yu, Timothée Masquelier, Ding Chen, Liwei Huang, Huihui Zhou, Guoqi Li, Yonghong Tian
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties.
no code implementations • 3 Jul 2023 • Jiyuan Zhang, Shiyan Chen, Yajing Zheng, Zhaofei Yu, Tiejun Huang
To process the spikes, we build a novel model, SpkOccNet, in which we integrate spike information from continuous viewpoints within multiple windows, and propose a novel cross-view mutual attention mechanism for effective fusion and refinement.
no code implementations • 9 Jun 2023 • Jianhao Ding, Zhaofei Yu, Tiejun Huang, Jian K. Liu
The success of deep learning in the past decade is partially shrouded in the shadow of adversarial attacks.
no code implementations • 15 May 2023 • Jinyang Jiang, Zeliang Zhang, Chenliang Xu, Zhaofei Yu, Yijie Peng
While backpropagation (BP) is the mainstream approach for gradient computation in neural network training, its heavy reliance on the chain rule of differentiation constrains the designing flexibility of network architecture and training pipelines.
1 code implementation • NeurIPS 2023 • Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, Yonghong Tian
Vanilla spiking neurons in Spiking Neural Networks (SNNs) use charge-fire-reset neuronal dynamics, which can only be simulated serially and can hardly learn long-time dependencies.
no code implementations • 6 Apr 2023 • Liwen Hu, Lei Ma, Zhaofei Yu, Boxin Shi, Tiejun Huang
Based on our noise model, we propose the first benchmark for spike stream denoising, which includes clean (noisy) spike streams.
no code implementations • CVPR 2024 • Shiyan Chen, Jiyuan Zhang, Zhaofei Yu, Tiejun Huang
Based on this, we propose the Asymmetric Tunable Blind-Spot Network (AT-BSN), in which the blind-spot size can be freely adjusted, thus better balancing the suppression of noise correlation and the destruction of local spatial structure during training and inference.
1 code implementation • 21 Mar 2023 • Yajing Zheng, Jiyuan Zhang, Rui Zhao, Jianhao Ding, Shiyan Chen, Ruiqin Xiong, Zhaofei Yu, Tiejun Huang
SpikeCV focuses on encapsulation for spike data, standardization for dataset interfaces, modularization for vision tasks, and real-time applications for challenging scenes.
2 code implementations • ICLR 2022 • Tong Bu, Wei Fang, Jianhao Ding, Penglin Dai, Zhaofei Yu, Tiejun Huang
In this paper, we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs.
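Such an estimated activation is commonly a quantization clip-floor style function with L discrete levels, chosen so that the ANN activation matches the average firing rate an SNN can express within L time steps. The sketch below shows one such form; the hyperparameters `L` and `lam` (the clipping level) and the 0.5 shift are illustrative assumptions rather than a statement of the paper's exact function.

```python
import torch

def quantized_clip_floor_activation(x, L=8, lam=1.0):
    """Quantization clip-floor style activation used to approximate the
    average firing rate of an SNN running for L time steps: the input is
    scaled, floored onto L levels, and clipped to [0, lam].
    """
    return lam * torch.clamp(torch.floor(x * L / lam + 0.5) / L, 0.0, 1.0)

print(quantized_clip_floor_activation(torch.linspace(-0.5, 1.5, 9)))
```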
1 code implementation • 25 Feb 2023 • Yanqi Chen, Zhengyu Ma, Wei Fang, Xiawu Zheng, Zhaofei Yu, Yonghong Tian
In this work, we reformulate soft threshold pruning as an implicit optimization problem solved using the Iterative Shrinkage-Thresholding Algorithm (ISTA), a classic method from the fields of sparse recovery and compressed sensing.
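For reference, the core of ISTA is the soft-thresholding (L1 proximal) step applied after a gradient step on the smooth part of the objective. A minimal sketch follows; the helper names `soft_threshold` and `ista_step` and their arguments are chosen for illustration rather than taken from the paper's code.

```python
import torch

def soft_threshold(w, threshold):
    """Proximal operator of the L1 norm (the soft-thresholding step in ISTA):
    shrink each weight toward zero by `threshold` and zero out anything smaller.
    """
    return torch.sign(w) * torch.clamp(w.abs() - threshold, min=0.0)

def ista_step(w, grad, lr, l1_strength):
    """One ISTA iteration: a gradient step on the smooth loss, followed by
    soft-thresholding. Applied to network weights, this is the implicit
    optimization view of soft threshold pruning.
    """
    return soft_threshold(w - lr * grad, lr * l1_strength)
```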
2 code implementations • 21 Feb 2023 • Zecheng Hao, Jianhao Ding, Tong Bu, Tiejun Huang, Zhaofei Yu
The experimental results show that our proposed method achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet datasets.
no code implementations • 17 Feb 2023 • Zeliang Zhang, Jinyang Jiang, Minjie Chen, Zhiyuan Wang, Yijie Peng, Zhaofei Yu
Noise injection-based methods have been shown to improve the robustness of artificial neural networks in previous work.
2 code implementations • 4 Feb 2023 • Zecheng Hao, Tong Bu, Jianhao Ding, Tiejun Huang, Zhaofei Yu
Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.
no code implementations • CVPR 2023 • Siqi Yang, Xuanning Cui, Yongjie Zhu, Jiajun Tang, Si Li, Zhaofei Yu, Boxin Shi
Relighting an outdoor scene is challenging due to the diverse illuminations and salient cast shadows.
1 code implementation • CVPR 2023 • Tong Bu, Jianhao Ding, Zecheng Hao, Zhaofei Yu
Spiking Neural Networks (SNNs) have attracted significant attention due to their energy-efficient properties and potential application on neuromorphic hardware.
1 code implementation • IEEE Transactions on Cybernetics 2022 • Chenxiang Ma, Rui Yan, Zhaofei Yu, Qiang Yu
We then propose two variants that additionally incorporate temporal dependencies through a backward and forward process, respectively.
no code implementations • 3 Feb 2022 • Tong Bu, Jianhao Ding, Zhaofei Yu, Tiejun Huang
We evaluate our algorithm on the CIFAR-10, CIFAR-100 and ImageNet datasets and achieve state-of-the-art accuracy, using fewer time-steps.
no code implementations • 23 Jan 2022 • Tiejun Huang, Yajing Zheng, Zhaofei Yu, Rui Chen, Yuan Li, Ruiqin Xiong, Lei Ma, Junwei Zhao, Siwei Dong, Lin Zhu, Jianing Li, Shanshan Jia, Yihua Fu, Boxin Shi, Si Wu, Yonghong Tian
By treating vidar as spike trains in biological vision, we have further developed a spiking neural network-based machine vision system that combines the speed of the machine and the mechanism of biological vision, achieving high-speed object detection and tracking 1,000x faster than human vision.
no code implementations • 29 Sep 2021 • Jianhao Ding, Jiyuan Zhang, Zhaofei Yu, Tiejun Huang
Although spiking neural networks (SNNs) show strong advantages in information encoding, power consumption, and computational capability, the underdevelopment of supervised learning algorithms remains a hindrance to training SNNs.
1 code implementation • 10 Sep 2021 • Ziluo Ding, Rui Zhao, Jiyuan Zhang, Tianxiao Gao, Ruiqin Xiong, Zhaofei Yu, Tiejun Huang
Recently, many deep learning methods have shown great success in providing promising solutions to many event-based problems, such as optical flow estimation.
no code implementations • CVPR 2021 • Yajing Zheng, Lingxiao Zheng, Zhaofei Yu, Boxin Shi, Yonghong Tian, Tiejun Huang
Mimicking the sampling mechanism of the fovea, a retina-inspired camera, named the spiking camera, is developed to record external information at a sampling rate of 40,000 Hz and output asynchronous binary spike streams.
1 code implementation • 25 May 2021 • Jianhao Ding, Zhaofei Yu, Yonghong Tian, Tiejun Huang
We show that the inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN.
1 code implementation • 11 May 2021 • Yanqi Chen, Zhaofei Yu, Wei Fang, Tiejun Huang, Yonghong Tian
Our key innovation is to redefine the gradient to a new synaptic parameter, allowing better exploration of network structures by taking full advantage of the competition between pruning and regrowth of connections.
1 code implementation • NeurIPS 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, Yonghong Tian
Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning.
no code implementations • ICCV 2021 • Jing Zhao, Jiyu Xie, Ruiqin Xiong, Jian Zhang, Zhaofei Yu, Tiejun Huang
In this paper, we properly exploit the relative motion and derive the relationship between light intensity and each spike, so as to recover the external scene with both high temporal and high spatial resolution.
1 code implementation • ICCV 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Timothee Masquelier, Tiejun Huang, Yonghong Tian
In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs.
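A common way to make the membrane time constant trainable is to parameterize the leak factor, e.g. as `sigmoid(w)` for a learnable scalar `w`, so that it stays in (0, 1) and can be optimized together with the synaptic weights. The sketch below follows that spirit; the class name `ParametricLIF`, the reset rule, and the omission of a surrogate gradient are simplifying assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class ParametricLIF(nn.Module):
    """LIF neuron whose membrane time constant is learned jointly with the
    synaptic weights by parameterizing the leak factor as sigmoid(w)."""

    def __init__(self, init_w=0.0, v_threshold=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(init_w))  # trainable; leak = sigmoid(w)
        self.v_threshold = v_threshold

    def forward(self, x_seq):
        # x_seq: [T, batch, ...] input current over T time steps
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            v = v + torch.sigmoid(self.w) * (x - v)   # leaky integration
            spike = (v >= self.v_threshold).float()   # fire (no surrogate gradient here)
            v = v * (1.0 - spike)                     # reset fired neurons to 0
            spikes.append(spike)
        return torch.stack(spikes)
```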
no code implementations • 30 Apr 2019 • Yichen Zhang, Shanshan Jia, Yajing Zheng, Zhaofei Yu, Yonghong Tian, Siwei Ma, Tiejun Huang, Jian. K. Liu
The SID is an end-to-end decoder with neural spikes at one end and images at the other, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion.
no code implementations • 22 Feb 2019 • Yajing Zheng, Shanshan Jia, Zhaofei Yu, Tiejun Huang, Jian. K. Liu, Yonghong Tian
Recent studies have suggested that the cognitive process of the human brain is realized as probabilistic inference and can be further modeled by probabilistic graphical models like Markov random fields.
no code implementations • 6 Nov 2018 • Qi Yan, Yajing Zheng, Shanshan Jia, Yichen Zhang, Zhaofei Yu, Feng Chen, Yonghong Tian, Tiejun Huang, Jian. K. Liu
When a deep CNN with many layers is used for the visual system, it is not easy to compare the structure components of CNNs with possible neuroscience underpinnings due to highly complex circuits from the retina to higher visual cortex.
no code implementations • 12 Aug 2018 • Shanshan Jia, Zhaofei Yu, Arno Onken, Yonghong Tian, Tiejun Huang, Jian. K. Liu
Furthermore, we show that STNMF can separate spikes of a ganglion cell into a few subsets of spikes where each subset is contributed by one presynaptic bipolar cell.
no code implementations • 2 Aug 2018 • Zhaofei Yu, Yonghong Tian, Tiejun Huang, Jian. K. Liu
Taken together, our results suggest that the WTA circuit could be seen as the minimal inference unit of neuronal circuits.
no code implementations • 8 Nov 2017 • Qi Yan, Zhaofei Yu, Feng Chen, Jian. K. Liu
By training CNNs with white noise images to predict neural responses, we found that the convolutional filters learned in the end resemble biological components of the retinal circuit.
no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass
Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the most well-known nonlinear optimization methods, simulated annealing.
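For readers unfamiliar with simulated annealing, the sketch below shows the generic accept/reject rule it refers to: a proposed configuration is accepted if it lowers the energy, or with probability exp(-ΔE/T) otherwise, while the temperature T is gradually lowered. The `energy` and `propose` callables stand in for a network-configuration cost and a stochastic rewiring move; this illustrates the optimization principle only, not the paper's synaptic sampling model.

```python
import math
import random

def anneal(energy, propose, x0, T0=1.0, cooling=0.95, steps=1000):
    """Generic simulated annealing: accept a move if it lowers the energy,
    otherwise accept with probability exp(-dE/T), cooling T each step."""
    x, e, T = x0, energy(x0), T0
    for _ in range(steps):
        x_new = propose(x)
        e_new = energy(x_new)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / T):
            x, e = x_new, e_new
        T *= cooling
    return x, e

# Toy usage: minimize a 1-D quadratic by random local moves
best, cost = anneal(lambda x: (x - 3.0) ** 2,
                    lambda x: x + random.uniform(-0.5, 0.5), x0=0.0)
print(best, cost)
```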
no code implementations • 3 Sep 2015 • Zhaofei Yu, Feng Chen, Jianwu Dong, Qionghai Dai
Although the Bayesian causal inference model successfully explains the problem of causal inference in cue combination, how causal inference in cue combination could be implemented by neural circuits remains unclear.