no code implementations • ICML 2020 • Liu Liu, Lei Deng, Zhaodong Chen, Yuke Wang, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie
Using Deep Neural Networks (DNNs) in machine learning tasks is promising for delivering high-quality results, but it is challenging to meet stringent latency requirements and energy constraints because of the memory-bound and compute-bound execution patterns of DNNs.
no code implementations • 28 Sep 2022 • Man Yao, Guangshe Zhao, Hengyu Zhang, Yifan Hu, Lei Deng, Yonghong Tian, Bo Xu, Guoqi Li
On ImageNet-1K, we achieve top-1 accuracies of 75.92% and 77.08% with single-step and 4-step Res-SNN-104, which are state-of-the-art results among SNNs.
no code implementations • 2 May 2022 • Hui Liu, Yibiao Huang, Xuejun Liu, Lei Deng
We developed a novel molecular graph augmentation strategy, referred to as attention-wise graph mask, to generate challenging positive samples for contrastive learning.
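As a rough illustration of the general idea (not the authors' implementation), the sketch below masks the features of the most-attended nodes to produce a harder positive view for contrastive learning; the attention scores, feature matrix, and mask ratio are placeholder assumptions.

```python
import torch

def attention_wise_mask(x, attn_scores, mask_ratio=0.2):
    """Zero out the features of the most-attended nodes to build a
    challenging positive view of the graph (illustrative sketch only).

    x           : (num_nodes, feat_dim) node feature matrix
    attn_scores : (num_nodes,) attention score per node
    mask_ratio  : fraction of nodes to mask
    """
    num_mask = max(1, int(mask_ratio * x.size(0)))
    # Nodes the model attends to most are the ones we hide.
    top_idx = torch.topk(attn_scores, num_mask).indices
    x_aug = x.clone()
    x_aug[top_idx] = 0.0
    return x_aug

# Toy usage with random placeholders.
x = torch.randn(10, 16)          # 10 nodes, 16-dim features
attn = torch.rand(10)            # hypothetical attention scores
positive_view = attention_wise_mask(x, attn, mask_ratio=0.3)
```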
1 code implementation • 25 Apr 2022 • Haojie Huang, Gongming Zhou, Xuejun Liu, Lei Deng, Chen Wu, Dachuan Zhang, Hui Liu
We leveraged contrastive learning on large-scale unannotated WSIs to derive slide-level histopathological features in latent space, and then transferred them to tumor diagnosis and the prediction of differentially expressed cancer driver genes.
no code implementations • 12 Apr 2022 • Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie
To the best of our knowledge, this is the first analysis on robust training of SNNs.
no code implementations • 14 Mar 2022 • Arindam Basu, Charlotte Frenkel, Lei Deng, Xueyong Zhang
In this paper, we reviewed spiking neural network (SNN) integrated circuit designs and analyzed the trends among mixed-signal cores, fully digital cores, and large-scale multi-core designs.
no code implementations • 21 Feb 2022 • Zhijun Zeng, Zhen Hou, Ting Li, Lei Deng, Jianguo Hou, Xinran Huang, Jun Li, Meirou Sun, Yunhan Wang, Qiyu Wu, Wenhao Zheng, Hua Jiang, Qi Wang
We develop a deep learning approach to predicting a set of ventilator parameters for a mechanically ventilated septic patient using a long short-term memory (LSTM) recurrent neural network (RNN) model.
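For concreteness, here is a minimal PyTorch sketch of an LSTM regressor that maps a window of patient time-series features to a vector of ventilator parameters; the feature dimension, hidden size, and number of target parameters are invented placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class VentilatorLSTM(nn.Module):
    """Toy LSTM that maps a patient time series to ventilator settings."""
    def __init__(self, num_features=12, hidden_size=64, num_params=4):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_params)

    def forward(self, x):               # x: (batch, time, num_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict from the last time step

model = VentilatorLSTM()
dummy = torch.randn(8, 24, 12)          # 8 patients, 24 time steps
print(model(dummy).shape)               # torch.Size([8, 4])
```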
no code implementations • 10 Feb 2022 • Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, Dongrui Fan, Shirui Pan, Yuan Xie
Next, we compare these methods in terms of their efficiency and characteristics.
1 code implementation • 15 Dec 2021 • Yifan Hu, Lei Deng, Yujie Wu, Man Yao, Guoqi Li
Despite the rapid progress of neuromorphic computing, inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice.
no code implementations • 9 Dec 2021 • Yifan Hu, Yujie Wu, Lei Deng, Guoqi Li
In this paper, we identify the crux and then propose a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any degradation problem.
2 code implementations • 23 Oct 2021 • Yihan Lin, Wei Ding, Shaohua Qiang, Lei Deng, Guoqi Li
With event-driven algorithms, especially spiking neural networks (SNNs), achieving continuous improvement in neuromorphic vision processing, a more challenging event-stream dataset is urgently needed.
1 code implementation • 14 Aug 2021 • Lei Deng, Yibiao Huang, Xuejun Liu, Hui Liu
We evaluated our method on three independent datasets and the experimental results showed that our proposed method outperformed six existing state-of-the-art methods.
no code implementations • 6 Aug 2021 • Aoyu Gong, Lei Deng, Fang Liu, Yijin Zhang
To overcome the infeasibility of obtaining an optimal or near-optimal scheme from the POMDP framework, we investigate the behaviors of the optimal scheme for two extreme cases in the MDP framework, and leverage the intuition gained from these behaviors to propose a heuristic scheme for the realistic environment whose TDR is close to the maximum achievable TDR in the idealized environment.
no code implementations • 25 Jul 2021 • Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie
Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, the low accuracy under the common local synaptic plasticity learning rules limits their application in many practical tasks.
no code implementations • 30 Jun 2021 • Mingkun Xu, Yujie Wu, Lei Deng, Faqiang Liu, Guoqi Li, Jing Pei
Biological spiking neurons with intrinsic dynamics underlie the powerful representation and learning capabilities of the brain for processing multimodal information in complex environments.
no code implementations • 27 May 2021 • Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li
In summary, the EOQ framework is specially designed for reducing the high cost of convolution and BN in network training, demonstrating a broad application prospect of online training in resource-limited devices.
no code implementations • 10 Mar 2021 • Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, Dongrui Fan
Graph Convolutional Networks (GCNs) have received significant attention from various research fields due to their excellent performance in learning graph representations.
no code implementations • 1 Jan 2021 • Zhaodong Chen, Zhao WeiQin, Lei Deng, Guoqi Li, Yuan Xie
Moreover, analysis of the activation mean in the forward pass reveals that the self-normalization property becomes weaker as the fan-in of each layer grows, which explains the performance degradation on large benchmarks such as ImageNet.
no code implementations • 30 Nov 2020 • Jiayi Yang, Lei Deng, Yukuan Yang, Yuan Xie, Guoqi Li
However, neural network quantization can be used to reduce the computation load while maintaining comparable accuracy and the original network structure.
2 code implementations • 29 Oct 2020 • Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, Guoqi Li
To this end, we propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of a very deep SNN and the efficient implementation of its inference on neuromorphic hardware.
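A rough sketch of the idea behind threshold-dependent normalization: pre-activations are normalized jointly over the batch and time dimensions and rescaled in proportion to the firing threshold, so membrane potentials stay in a range where spikes can actually be emitted. The threshold value, scaling factor, and tensor layout below are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def td_batch_norm(x, v_th=1.0, alpha=1.0, eps=1e-5):
    """Normalize pre-activations over (time, batch, spatial) per channel and
    rescale by the firing threshold (illustrative tdBN-style sketch).

    x : (T, B, C, H, W) pre-activations of a convolutional SNN layer
    """
    # Statistics are shared across time steps and the batch, per channel.
    mean = x.mean(dim=(0, 1, 3, 4), keepdim=True)
    var = x.var(dim=(0, 1, 3, 4), unbiased=False, keepdim=True)
    return alpha * v_th * (x - mean) / torch.sqrt(var + eps)

x = torch.randn(4, 8, 16, 10, 10)   # 4 time steps, batch of 8, 16 channels
y = td_batch_norm(x, v_th=1.0)
```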
no code implementations • 26 Sep 2020 • Xiaobing Chen, Yuke Wang, Xinfeng Xie, Xing Hu, Abanti Basak, Ling Liang, Mingyu Yan, Lei Deng, Yufei Ding, Zidong Du, Yunji Chen, Yuan Xie
Graph convolutional networks (GCNs) have emerged as a promising direction for learning inductive representations of graph data, which is commonly used in widespread applications such as e-commerce, social networks, and knowledge graphs.
Hardware Architecture
no code implementations • 21 Aug 2020 • Dingheng Wang, Bijiao Wu, Guangshe Zhao, Man Yao, Hengnu Chen, Lei Deng, Tianyi Yan, Guoqi Li
Recurrent neural networks (RNNs) are powerful for tasks oriented to sequential data, such as natural language processing and video recognition.
no code implementations • 29 Jun 2020 • Bijiao Wu, Dingheng Wang, Guangshe Zhao, Lei Deng, Guoqi Li
We further theoretically and experimentally discover that the HT format has better performance on compressing weight matrices, while the TT format is more suited for compressing convolutional kernels.
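As a small illustration of why such tensor formats compress weights, the sketch below rebuilds a dense matrix from random tensor-train (TT) cores and compares the number of stored parameters against the dense matrix; the shapes and ranks are arbitrary, and the hierarchical Tucker (HT) case is omitted for brevity.

```python
import numpy as np

def tt_to_matrix(cores):
    """Contract TT-matrix cores G_k of shape (r_{k-1}, m_k, n_k, r_k)
    into a dense matrix of shape (prod m_k, prod n_k)."""
    full = cores[0]
    for core in cores[1:]:
        # Merge the shared TT rank between consecutive cores.
        full = np.tensordot(full, core, axes=([-1], [0]))
    # full now has shape (1, m_1, n_1, m_2, n_2, ..., 1); drop boundary ranks.
    full = full.reshape(full.shape[1:-1])
    m_modes = full.shape[0::2]
    n_modes = full.shape[1::2]
    # Gather row modes first, column modes second, then flatten.
    order = list(range(0, full.ndim, 2)) + list(range(1, full.ndim, 2))
    return full.transpose(order).reshape(np.prod(m_modes), np.prod(n_modes))

# A 64x64 weight matrix factored as (4*4*4) x (4*4*4) with TT rank 3.
rng = np.random.default_rng(0)
ranks = [1, 3, 3, 1]
cores = [rng.standard_normal((ranks[k], 4, 4, ranks[k + 1])) for k in range(3)]
W = tt_to_matrix(cores)
dense_params = W.size                       # 4096 entries if stored densely
tt_params = sum(c.size for c in cores)      # 48 + 144 + 48 = 240 entries
print(W.shape, dense_params, tt_params)
```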
1 code implementation • 11 Jun 2020 • Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, Yufei Ding
As the emerging trend of graph-based deep learning, Graph Neural Networks (GNNs) excel thanks to their capability to generate high-quality node feature vectors (embeddings).
Distributed, Parallel, and Cluster Computing
no code implementations • 5 Jun 2020 • Yujie Wu, Rong Zhao, Jun Zhu, Feng Chen, Mingkun Xu, Guoqi Li, Sen Song, Lei Deng, Guanrui Wang, Hao Zheng, Jing Pei, Youhui Zhang, Mingguo Zhao, Luping Shi
We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.
1 code implementation • 2 May 2020 • Weihua He, Yujie Wu, Lei Deng, Guoqi Li, Haoyu Wang, Yang Tian, Wei Ding, Wenhui Wang, Yuan Xie
Neuromorphic data, which record frameless spike events, have attracted considerable attention for their spatiotemporal information components and event-driven processing fashion.
Ranked #4 on Gesture Recognition on DVS128 Gesture
1 code implementation • 7 Jan 2020 • Mingyu Yan, Lei Deng, Xing Hu, Ling Liang, Yujing Feng, Xiaochun Ye, Zhimin Zhang, Dongrui Fan, Yuan Xie
In this work, we first characterize the hybrid execution patterns of GCNs on Intel Xeon CPU.
Distributed, Parallel, and Cluster Computing
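For reference, here is a minimal sketch of the two phases a GCN layer alternates between, which is the kind of hybrid execution pattern such a characterization separates: a sparse, irregular neighbor aggregation followed by a dense, regular feature combination. The graph, feature sizes, and normalization are arbitrary placeholders.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj_norm, features, weight):
    """One GCN layer split into its two execution phases."""
    # Phase 1: aggregation -- sparse-dense product, irregular memory access.
    aggregated = adj_norm @ features
    # Phase 2: combination -- dense GEMM, regular and compute-bound.
    return np.maximum(aggregated @ weight, 0.0)       # ReLU

# Tiny random graph with self-loops and symmetric normalization.
rng = np.random.default_rng(0)
n, f_in, f_out = 6, 8, 4
adj = sp.random(n, n, density=0.3, random_state=0, format="csr")
adj = ((adj + adj.T) > 0).astype(float) + sp.eye(n)
deg_inv_sqrt = sp.diags(1.0 / np.sqrt(np.asarray(adj.sum(axis=1)).ravel()))
adj_norm = deg_inv_sqrt @ adj @ deg_inv_sqrt

x = rng.standard_normal((n, f_in))
w = rng.standard_normal((f_in, f_out))
print(gcn_layer(adj_norm, x, w).shape)                 # (6, 4)
```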
no code implementations • 1 Jan 2020 • Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie
Recently, backpropagation-through-time-inspired learning algorithms have been widely introduced into SNNs to improve the performance, which brings the possibility of attacking the models accurately given spatio-temporal gradient maps.
1 code implementation • 1 Jan 2020 • Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie
Powered by our metric and framework, we analyze extensive initialization schemes, normalization methods, and network structures.
no code implementations • 28 Dec 2019 • Yukuan Yang, Lei Deng, Peng Jiao, Yansong Chua, Jing Pei, Cheng Ma, Guoqi Li
In summary, this work provides a new solution for lensless imaging through scattering media using transfer learning in DNNs.
no code implementations • 8 Dec 2019 • Dingheng Wang, Guangshe Zhao, Guoqi Li, Lei Deng, Yang Wu
However, due to the higher dimension of convolutional kernels, the space complexity of 3DCNNs is generally larger than that of traditional two-dimensional convolutional neural networks (2DCNNs).
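To make that space-complexity gap concrete, a quick back-of-the-envelope comparison of the weight counts of a 2D and a 3D convolutional layer with otherwise matched (made-up) hyperparameters:

```python
# Parameters of a conv layer = C_in * C_out * prod(kernel dims), ignoring bias.
c_in, c_out, k = 64, 128, 3
params_2d = c_in * c_out * k * k            # 73,728 weights for a 3x3 kernel
params_3d = c_in * c_out * k * k * k        # 221,184 weights for a 3x3x3 kernel
print(params_2d, params_3d, params_3d / params_2d)   # the 3D layer is k times larger
```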
no code implementations • 3 Nov 2019 • Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, Yuan Xie
As is well known, the huge memory and compute costs of both artificial neural networks (ANNs) and spiking neural networks (SNNs) greatly hinder their deployment on edge devices with high efficiency.
no code implementations • 25 Sep 2019 • Liu Liu, Lei Deng, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie
Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising in delivering high-quality results but challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs.
no code implementations • 15 Sep 2019 • Zheyu Yang, Yujie Wu, Guanrui Wang, Yukuan Yang, Guoqi Li, Lei Deng, Jun Zhu, Luping Shi
To the best of our knowledge, DashNet is the first framework that can integrate and process ANNs and SNNs in a hybrid paradigm, which provides a novel solution to achieve both effectiveness and efficiency for high-speed object tracking.
2 code implementations • 5 Sep 2019 • Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li
In this way, all the operations in the training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency.
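One common ingredient behind such fully quantized training is a quantizer whose backward pass uses the straight-through estimator, so gradients can flow through the rounding step. The sketch below is a generic example of that technique, not necessarily the paper's exact scheme; the bit width and value range are arbitrary.

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Uniform k-bit quantization of values in [0, 1] with a
    straight-through estimator in the backward pass."""
    @staticmethod
    def forward(ctx, x, bits=8):
        scale = 2 ** bits - 1
        return torch.round(torch.clamp(x, 0.0, 1.0) * scale) / scale

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through the rounding op unchanged.
        return grad_output, None

x = torch.rand(4, requires_grad=True)
y = QuantizeSTE.apply(x, 4)
y.sum().backward()
print(y, x.grad)     # gradients are all ones despite the non-differentiable rounding
```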
no code implementations • 26 Aug 2019 • Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding
As a promising solution to boost the performance of distance-related algorithms (e.g., K-means and KNN), FPGA-based acceleration has attracted a lot of attention, but it also comes with numerous challenges.
Distributed, Parallel, and Cluster Computing Programming Languages
no code implementations • ICLR 2019 • Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Ling Liang, Yufei Ding, Yuan Xie
We identify that effectiveness expects less data correlation, while efficiency expects a regular execution pattern.
no code implementations • 10 Mar 2019 • Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, Timothy Sherwood, Yuan Xie
As neural networks continue their reach into nearly every aspect of software operations, the details of those networks become an increasingly sensitive subject.
Cryptography and Security Hardware Architecture
no code implementations • NeurIPS 2018 • Yu Ji, Ling Liang, Lei Deng, Youyang Zhang, Youhui Zhang, Yuan Xie
Increasing the sparsity granularity can lead to better hardware utilization, but it will compromise the sparsity for maintaining accuracy.
no code implementations • NeurIPS 2018 • Peiqi Wang, Xinfeng Xie, Lei Deng, Guoqi Li, Dongsheng Wang, Yuan Xie
For example, we improve the perplexity per word (PPW) of a ternary LSTM on the Penn Tree Bank (PTB) corpus from 126 (the state-of-the-art result to the best of our knowledge) to 110.3, with the full-precision model at 97.2, and that of a ternary GRU from 142 to 113.5, with the full-precision model at 102.7.
no code implementations • 25 Oct 2018 • Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Xin Ma, Yuan Xie
In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration.
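A schematic version of that idea (with an arbitrary sampling ratio, not the paper's algorithm): estimate the batch statistics from a random subset of the rows and use them to normalize the full batch.

```python
import torch

def sampled_batch_norm(x, sample_ratio=0.25, eps=1e-5):
    """Normalize a (batch, features) tensor using mean/variance
    estimated from only a sampled fraction of the batch."""
    n = max(2, int(sample_ratio * x.size(0)))
    idx = torch.randperm(x.size(0))[:n]          # random micro-batch
    sample = x[idx]
    mean = sample.mean(dim=0)
    var = sample.var(dim=0, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(128, 32)
y = sampled_batch_norm(x)
print(y.mean().item(), y.std().item())           # roughly 0 and 1
```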
no code implementations • ICLR 2019 • Liu Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie
We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference.
no code implementations • 16 Sep 2018 • Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Luping Shi
Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention.
no code implementations • 25 Jul 2018 • Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, Yuan Xie
Crossbar-architecture-based devices have been widely adopted in neural network accelerators because of their high efficiency on vector-matrix multiplication (VMM) operations.
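For intuition, an analog crossbar evaluates a VMM essentially in one step through Ohm's and Kirchhoff's laws: input voltages drive the rows, weights are stored as device conductances, and each column current is a dot product. A toy numerical model with made-up values, assuming ideal devices (no noise, wire resistance, or quantization):

```python
import numpy as np

# Weights mapped to device conductances (siemens), inputs to row voltages (volts).
conductance = np.array([[1e-6, 3e-6],
                        [2e-6, 1e-6],
                        [4e-6, 2e-6]])      # 3 rows x 2 columns
voltages = np.array([0.2, 0.5, 0.1])        # one voltage per row

# Each column current is the sum of V_i * G_ij (Kirchhoff's current law),
# i.e. the crossbar physically evaluates V @ G.
column_currents = voltages @ conductance
print(column_currents)                      # amperes, one value per column
```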
no code implementations • 27 Feb 2018 • Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Yuan Xie, Luping Shi
Batch Normalization (BN) has been proven to be quite effective at accelerating and improving the training of deep neural networks (DNNs).
no code implementations • 13 Jan 2018 • Sheng-Kai Liao, Wen-Qi Cai, Johannes Handsteiner, Bo Liu, Juan Yin, Liang Zhang, Dominik Rauch, Matthias Fink, Ji-Gang Ren, Wei-Yue Liu, Yang Li, Qi Shen, Yuan Cao, Feng-Zhi Li, Jian-Feng Wang, Yong-Mei Huang, Lei Deng, Tao Xi, Lu Ma, Tai Hu, Li Li, Nai-Le Liu, Franz Koidl, Peiyuan Wang, Yu-Ao Chen, Xiang-Bin Wang, Michael Steindorfer, Georg Kirchner, Chao-Yang Lu, Rong Shu, Rupert Ursin, Thomas Scheidl, Cheng-Zhi Peng, Jian-Yu Wang, Anton Zeilinger, Jian-Wei Pan
On the one hand, this involved the transmission of images in a one-time-pad configuration from China to Austria as well as from Austria to China.
Quantum Physics
1 code implementation • 7 Oct 2017 • Lei Deng, Yinghui He, Ying Zhang, Minghua Chen, Zongpeng Li, Jack Y. B. Lee, Ying Jun Zhang, Lingyang Song
The idea is to shift traffic from a congested cell to its adjacent under-utilized cells by leveraging inter-cell D2D communication, so that the traffic can be served without using extra spectrum, effectively improving the spectrum temporal efficiency.
Networking and Internet Architecture
1 code implementation • 8 Jun 2017 • Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Luping Shi
By simultaneously considering the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD) in the training phase, as well as an approximated derivative for the spike activity, we propose a spatio-temporal backpropagation (STBP) training framework without using any complicated technology.
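A compact sketch of the key trick that makes such direct training possible: the Heaviside spike function is kept in the forward pass, while the backward pass substitutes an approximated (here rectangular) derivative around the threshold. The threshold, window width, and decay constant below are illustrative values, not the paper's settings.

```python
import torch

V_TH, WIDTH = 1.0, 0.5      # firing threshold and surrogate window (illustrative)

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate
    derivative of width WIDTH around the threshold in the backward pass."""
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane >= V_TH).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        surrogate = ((membrane - V_TH).abs() < WIDTH).float() / (2 * WIDTH)
        return grad_output * surrogate

def lif_step(inp, v_prev, spike_prev, decay=0.5):
    """u[t] = decay * u[t-1] * (1 - o[t-1]) + input;  o[t] = spike(u[t])."""
    v = decay * v_prev * (1.0 - spike_prev) + inp
    return SpikeFn.apply(v), v

x = torch.randn(5, requires_grad=True)
spike, v = lif_step(x, torch.zeros(5), torch.zeros(5))
spike.sum().backward()
print(spike, x.grad)      # gradients are nonzero only near the threshold
```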
1 code implementation • 25 May 2017 • Lei Deng, Peng Jiao, Jing Pei, Zhenzhi Wu, Guoqi Li
In this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR.
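A loose sketch of the kind of discretization such a framework unifies: values constrained to {-1, 0, +1}, with a zero band whose width controls how the scheme interpolates between binary and ternary networks. The thresholding rule here is a generic illustration, not the gated-XNOR algorithm itself.

```python
import numpy as np

def ternarize(w, delta=0.3):
    """Map real values to {-1, 0, +1}; delta controls the zero band.
    delta = 0 degenerates to a binary (sign-like) network."""
    q = np.zeros_like(w)
    q[w > delta] = 1.0
    q[w < -delta] = -1.0
    return q

w = np.array([-0.9, -0.2, 0.05, 0.4, 0.75])
print(ternarize(w, delta=0.3))   # [-1.  0.  0.  1.  1.]
print(ternarize(w, delta=0.0))   # [-1. -1.  1.  1.  1.] (binary special case)
```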
no code implementations • 20 Sep 2015 • Lei Deng, Siyuan Huang, Yueqi Duan, Baohua Chen, Jie Zhou
Conventional single-image-based localization methods usually fail to localize a query image when there exist large variations between the query image and the pre-built scene.