Search Results for author: Lei Deng

Found 41 papers, 10 papers with code

Boosting Deep Neural Network Efficiency with Dual-Module Inference

no code implementations ICML 2020 Liu Liu, Lei Deng, Zhaodong Chen, Yuke Wang, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie

Using Deep Neural Networks (DNNs) in machine learning tasks is promising for delivering high-quality results, but meeting stringent latency requirements and energy constraints is challenging because of the memory-bound and compute-bound execution patterns of DNNs.

ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks

1 code implementation 23 Oct 2021 Yihan Lin, Wei Ding, Shaohua Qiang, Lei Deng, Guoqi Li

As event-driven algorithms, especially spiking neural networks (SNNs), achieve continuous improvement in neuromorphic vision processing, a more challenging event-stream dataset is urgently needed.

Graph2MDA: a multi-modal variational graph embedding model for predicting microbe-drug associations

1 code implementation 14 Aug 2021 Lei Deng, Yibiao Huang, Xuejun Liu, Hui Liu

We evaluated our method on three independent datasets and the experimental results showed that our proposed method outperformed six existing state-of-the-art methods.

Graph Embedding

Dynamic Control for Random Access in Deadline-Constrained Broadcasting

no code implementations 6 Aug 2021 Aoyu Gong, Lei Deng, Fang Liu, Yijin Zhang

To overcome the infeasibility of obtaining an optimal or near-optimal scheme from the POMDP framework, we investigate the behaviors of the optimal scheme for two extreme cases in the MDP framework, and leverage the intuition gained from these behaviors to propose a heuristic scheme for the realistic environment, with a TDR close to the maximum achievable TDR in the idealized environment.

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

no code implementations 25 Jul 2021 Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie

Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks.

Exploiting Spiking Dynamics with Spatial-temporal Feature Normalization in Graph Learning

no code implementations 30 Jun 2021 Mingkun Xu, Yujie Wu, Lei Deng, Faqiang Liu, Guoqi Li, Jing Pei

Biological spiking neurons with intrinsic dynamics underlie the powerful representation and learning capabilities of the brain for processing multimodal information in complex environments.

Graph Attention • Graph Learning +1

Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization

no code implementations 27 May 2021 Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li

In summary, the EOQ framework is specially designed to reduce the high cost of convolution and BN in network training, demonstrating the broad application prospects of online training on resource-limited devices.

Model Compression • Quantization

Sampling methods for efficient training of graph convolutional networks: A survey

no code implementations 10 Mar 2021 Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, Dongrui Fan

Graph Convolutional Networks (GCNs) have received significant attention from various research fields due to the excellent performance in learning graph representations.

Redefining Self-Normalization Property

no code implementations 1 Jan 2021 Zhaodong Chen, Zhao WeiQin, Lei Deng, Guoqi Li, Yuan Xie

Moreover, analysis on the activation's mean in the forward pass reveals that the self-normalization property gets weaker with larger fan-in of each layer, which explains the performance degradation on large benchmarks like ImageNet.

Data Augmentation

Training and Inference for Integer-Based Semantic Segmentation Network

no code implementations 30 Nov 2020 Jiayi Yang, Lei Deng, Yukuan Yang, Yuan Xie, Guoqi Li

However, neural network quantization can be used to reduce computation load while maintaining comparable accuracy and original network structure.

Quantization • Semantic Segmentation

Going Deeper With Directly-Trained Larger Spiking Neural Networks

2 code implementations 29 Oct 2020 Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, Guoqi Li

To this end, we propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of a very deep SNN and the efficient implementation of its inference on neuromorphic hardware.
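The core idea can be sketched in a few lines: a threshold-dependent batch normalization shares statistics across the batch and time dimensions and rescales by the firing threshold. The function below is our illustrative NumPy rendering, not the authors' implementation; the `alpha` scale and the `(T, B, C)` layout are assumptions.

```python
import numpy as np

def td_batch_norm(x, v_th=1.0, alpha=1.0, eps=1e-5):
    """Threshold-dependent batch norm sketch: x has shape (T, B, C).
    Statistics are computed jointly over time steps and the batch,
    and the normalized input is rescaled by alpha * v_th so that
    pre-synaptic inputs stay commensurate with the firing threshold."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return alpha * v_th * (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 3.0, size=(4, 32, 8))  # (time, batch, channels)
y = td_batch_norm(x, v_th=0.5)             # per-channel std ~ 0.5
```

In a trainable version, `alpha` and a per-channel shift would be learned parameters, as in standard batch normalization.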

Rubik: A Hierarchical Architecture for Efficient Graph Learning

no code implementations 26 Sep 2020 Xiaobing Chen, Yuke Wang, Xinfeng Xie, Xing Hu, Abanti Basak, Ling Liang, Mingyu Yan, Lei Deng, Yufei Ding, Zidong Du, Yunji Chen, Yuan Xie

Graph convolutional network (GCN) emerges as a promising direction to learn the inductive representation in graph data commonly used in widespread applications, such as E-commerce, social networks, and knowledge graphs.

Hardware Architecture

Kronecker CP Decomposition with Fast Multiplication for Compressing RNNs

no code implementations 21 Aug 2020 Dingheng Wang, Bijiao Wu, Guangshe Zhao, Man Yao, Hengnu Chen, Lei Deng, Tianyi Yan, Guoqi Li

Recurrent neural networks (RNNs) are powerful in tasks oriented to sequential data, such as natural language processing and video recognition.

Tensor Decomposition • Video Recognition

Hybrid Tensor Decomposition in Neural Network Compression

no code implementations 29 Jun 2020 Bijiao Wu, Dingheng Wang, Guangshe Zhao, Lei Deng, Guoqi Li

We further theoretically and experimentally discover that the HT format has better performance on compressing weight matrices, while the TT format is more suited for compressing convolutional kernels.
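To make the compression comparison concrete, here is a back-of-the-envelope parameter count for the tensor-train (TT) format. This is a generic illustration under standard TT conventions (cores of shape `(r_{i-1}, m_i, n_i, r_i)` with boundary ranks 1); the exact factorizations and ranks used in the paper may differ.

```python
from math import prod

def tt_param_count(modes, ranks):
    """Parameters of a TT-matrix decomposition whose i-th core has
    shape (r_{i-1}, m_i, n_i, r_i); ranks has length d+1 with
    r_0 = r_d = 1. The full tensor would need prod(m_i * n_i)."""
    return sum(ranks[i] * m * n * ranks[i + 1]
               for i, (m, n) in enumerate(modes))

# Example: a 4096x4096 weight matrix factored as (8x8)^4, TT-rank 4.
modes = [(8, 8)] * 4
full = prod(m * n for m, n in modes)       # 16,777,216 parameters
tt = tt_param_count(modes, [1, 4, 4, 4, 1])  # a few thousand
```

The ratio `full / tt` is the compression factor; larger ranks trade compression for expressiveness.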

Neural Network Compression • Tensor Decomposition

GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs

1 code implementation 11 Jun 2020 Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, Yufei Ding

As the emerging trend of graph-based deep learning, Graph Neural Networks (GNNs) excel for their capability to generate high-quality node feature vectors (embeddings).

Distributed, Parallel, and Cluster Computing

Brain-inspired global-local learning incorporated with neuromorphic computing

no code implementations 5 Jun 2020 Yujie Wu, Rong Zhao, Jun Zhu, Feng Chen, Mingkun Xu, Guoqi Li, Sen Song, Lei Deng, Guanrui Wang, Hao Zheng, Jing Pei, Youhui Zhang, Mingguo Zhao, Luping Shi

We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.

Continual Learning • Few-Shot Learning

Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities and Differences

1 code implementation 2 May 2020 Weihua He, Yujie Wu, Lei Deng, Guoqi Li, Haoyu Wang, Yang Tian, Wei Ding, Wenhui Wang, Yuan Xie

Neuromorphic data, which record frameless spike events, have attracted considerable attention for their spatiotemporal information components and event-driven processing fashion.


HyGCN: A GCN Accelerator with Hybrid Architecture

no code implementations 7 Jan 2020 Mingyu Yan, Lei Deng, Xing Hu, Ling Liang, Yujing Feng, Xiaochun Ye, Zhimin Zhang, Dongrui Fan, Yuan Xie

In this work, we first characterize the hybrid execution patterns of GCNs on Intel Xeon CPU.

Distributed, Parallel, and Cluster Computing

Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient

no code implementations 1 Jan 2020 Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie

Recently, learning algorithms inspired by backpropagation through time have been widely introduced into SNNs to improve performance, which makes it possible to attack the models accurately given spatio-temporal gradient maps.

Adversarial Attack

A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks

1 code implementation 1 Jan 2020 Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie

Powered by our metric and framework, we analyze extensive initialization, normalization, and network structures.

Transfer Learning in General Lensless Imaging through Scattering Media

no code implementations 28 Dec 2019 Yukuan Yang, Lei Deng, Peng Jiao, Yansong Chua, Jing Pei, Cheng Ma, Guoqi Li

In summary, this work provides a new solution for lensless imaging through scattering media using transfer learning in DNNs.

Transfer Learning

Compressing 3DCNNs Based on Tensor Train Decomposition

no code implementations 8 Dec 2019 Dingheng Wang, Guangshe Zhao, Guoqi Li, Lei Deng, Yang Wu

However, due to the higher dimension of convolutional kernels, the space complexity of 3DCNNs is generally larger than that of traditional two dimensional convolutional neural networks (2DCNNs).
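The dimensionality gap is easy to quantify: for the same channel configuration, a k×k×k 3D kernel carries k times the parameters of a k×k 2D kernel. The arithmetic below is our own illustration, not taken from the paper.

```python
def conv_params(kernel_dims, c_in, c_out):
    """Parameter count of a convolution layer (bias terms omitted):
    the product of all kernel dimensions times c_in * c_out."""
    n = c_in * c_out
    for k in kernel_dims:
        n *= k
    return n

p2d = conv_params((3, 3), 64, 64)      # 2D: 3*3*64*64 = 36,864
p3d = conv_params((3, 3, 3), 64, 64)   # 3D: 3*3*3*64*64 = 110,592
```

The extra temporal (or depth) dimension multiplies the parameter count, which is why tensor decompositions pay off more for 3DCNNs.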

Neural Network Compression

Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization

no code implementations 3 Nov 2019 Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, Yuan Xie

As is well known, the huge memory and compute costs of both artificial neural networks (ANNs) and spiking neural networks (SNNs) greatly hinder their efficient deployment on edge devices.

Model Compression • Quantization

Dual-module Inference for Efficient Recurrent Neural Networks

no code implementations 25 Sep 2019 Liu Liu, Lei Deng, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie

Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results, but meeting stringent latency requirements is challenging because of the memory-bound execution pattern of RNNs.

DashNet: A Hybrid Artificial and Spiking Neural Network for High-speed Object Tracking

no code implementations 15 Sep 2019 Zheyu Yang, Yujie Wu, Guanrui Wang, Yukuan Yang, Guoqi Li, Lei Deng, Jun Zhu, Luping Shi

To the best of our knowledge, DashNet is the first framework that can integrate and process ANNs and SNNs in a hybrid paradigm, which provides a novel solution to achieve both effectiveness and efficiency for high-speed object tracking.

Object Tracking

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers

2 code implementations 5 Sep 2019 Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li

In this way, all the operations in the training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency.
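A minimal symmetric int8 quantizer gives the flavor of such integer pipelines. This is a generic sketch, not the paper's actual quantization scheme for weights, activations, gradients, and errors; the function names and the per-tensor scale are our assumptions.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats to int8 in
    [-127, 127] with a single float scale factor."""
    scale = max(float(np.max(np.abs(x))), 1e-12) / 127.0
    q = np.clip(np.rint(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)
q, s = quantize_int8(x)
err = float(np.max(np.abs(dequantize(q, s) - x)))  # bounded by s/2
```

Once all tensors live in int8, the multiply-accumulate work can run on integer units, which is the source of the claimed speed and energy gains.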


AccD: A Compiler-based Framework for Accelerating Distance-related Algorithms on CPU-FPGA Platforms

no code implementations 26 Aug 2019 Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding

As a promising solution to boost the performance of distance-related algorithms (e.g., K-means and KNN), FPGA-based acceleration attracts considerable attention, but it also comes with numerous challenges.

Distributed, Parallel, and Cluster Computing • Programming Languages

Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints

no code implementations 10 Mar 2019 Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, Timothy Sherwood, Yuan Xie

As neural networks continue their reach into nearly every aspect of software operations, the details of those networks become an increasingly sensitive subject.

Cryptography and Security • Hardware Architecture

TETRIS: TilE-matching the TRemendous Irregular Sparsity

no code implementations NeurIPS 2018 Yu Ji, Ling Liang, Lei Deng, Youyang Zhang, Youhui Zhang, Yuan Xie

Increasing the sparsity granularity can lead to better hardware utilization, but it will compromise the sparsity for maintaining accuracy.

HitNet: Hybrid Ternary Recurrent Neural Network

no code implementations NeurIPS 2018 Peiqi Wang, Xinfeng Xie, Lei Deng, Guoqi Li, Dongsheng Wang, Yuan Xie

For example, we improve the perplexity per word (PPW) of a ternary LSTM on the Penn Tree Bank (PTB) corpus from 126 (the state-of-the-art result, to the best of our knowledge) to 110.3, against 97.2 for the full-precision model, and that of a ternary GRU from 142 to 113.5, against 102.7 for the full-precision model.
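A generic threshold-based ternarization shows the basic mechanism behind such ternary RNNs. This is an illustrative sketch, not HitNet's actual hybrid scheme; the threshold fraction `delta_frac` is an arbitrary assumption of ours.

```python
import numpy as np

def ternarize(w, delta_frac=0.7):
    """Map weights to {-alpha, 0, +alpha}: zero out weights whose
    magnitude falls below a threshold, then scale the surviving
    signs by the mean surviving magnitude."""
    delta = delta_frac * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
tw = ternarize(w)  # every entry is -alpha, 0, or +alpha
```

With only three weight values, multiplications reduce to sign flips and additions, which is what makes ternary networks hardware-friendly.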


Batch Normalization Sampling

no code implementations 25 Oct 2018 Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Xin Ma, Yuan Xie

In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration.
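The idea can be sketched as follows: compute the normalization statistics on a random subset of the batch, then apply them to every sample. This is our illustrative code; the paper's sampling strategy may be more elaborate.

```python
import numpy as np

def sampled_batch_norm(x, frac=0.25, eps=1e-5, rng=None):
    """Normalize a (B, C) batch using mean/variance estimated from
    a random fraction of the rows, reducing the cost of the
    statistics pass while applying the normalization to all rows."""
    rng = rng or np.random.default_rng(0)
    n = max(2, int(len(x) * frac))
    idx = rng.choice(len(x), size=n, replace=False)
    mean = x[idx].mean(axis=0)
    var = x[idx].var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=(128, 16))
y = sampled_batch_norm(x, rng=rng)  # approx zero-mean, unit-std
```

The subset statistics are noisy estimators of the full-batch ones, so the normalized output is only approximately zero-mean and unit-variance; the paper's point is that this noise is tolerable.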

Dynamic Sparse Graph for Efficient Deep Learning

no code implementations ICLR 2019 Liu Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie

We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference.

Dimensionality Reduction

Direct Training for Spiking Neural Networks: Faster, Larger, Better

no code implementations 16 Sep 2018 Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Luping Shi

Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention.

Crossbar-aware neural network pruning

no code implementations 25 Jul 2018 Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, Yuan Xie

Crossbar architecture based devices have been widely adopted in neural network accelerators by taking advantage of the high efficiency on vector-matrix multiplication (VMM) operations.

Network Pruning

L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks

no code implementations 27 Feb 2018 Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Yuan Xie, Luping Shi

Batch Normalization (BN) has been proven to be quite effective at accelerating and improving the training of deep neural networks (DNNs).


Device-to-Device Load Balancing for Cellular Networks

1 code implementation 7 Oct 2017 Lei Deng, Yinghui He, Ying Zhang, Minghua Chen, Zongpeng Li, Jack Y. B. Lee, Ying Jun Zhang, Lingyang Song

The idea is to shift traffic from a congested cell to its adjacent under-utilized cells by leveraging inter-cell D2D communication, so that the traffic can be served without using extra spectrum, effectively improving the spectrum temporal efficiency.

Networking and Internet Architecture

Spatio-Temporal Backpropagation for Training High-performance Spiking Neural Networks

1 code implementation 8 Jun 2017 Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Luping Shi

By simultaneously considering the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD) in the training phase, as well as an approximated derivative for the spike activity, we propose a spatio-temporal backpropagation (STBP) training framework without using any complicated technology.
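The "approximated derivative for the spike activity" is what is now commonly called a surrogate gradient: the non-differentiable Heaviside firing function gets a smooth or rectangular stand-in during the backward pass. A common rectangular-window choice is shown below as an illustration; it is not necessarily the exact function used in the paper.

```python
import numpy as np

def spike(u, v_th=1.0):
    """Heaviside firing function: emit a spike when the membrane
    potential u reaches the threshold v_th."""
    return (u >= v_th).astype(np.float32)

def surrogate_grad(u, v_th=1.0, a=1.0):
    """Rectangular surrogate for d(spike)/du: value 1/a inside a
    window of width a centered on the threshold, 0 elsewhere."""
    return (np.abs(u - v_th) < a / 2).astype(np.float32) / a

u = np.array([0.0, 0.8, 1.0, 1.2, 2.0])  # membrane potentials
s = spike(u)                             # [0, 0, 1, 1, 1]
g = surrogate_grad(u)                    # [0, 1, 1, 1, 0]
```

During training, the forward pass uses `spike` while backpropagation substitutes `surrogate_grad`, letting gradients flow through both the spatial and temporal domains.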

Object Detection

GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework

1 code implementation 25 May 2017 Lei Deng, Peng Jiao, Jing Pei, Zhenzhi Wu, Guoqi Li

In this way, we build a unified framework that subsumes binary and ternary networks as special cases, under which a heuristic algorithm is provided at https://github.com/AcrossV/Gated-XNOR.

Image Set Querying Based Localization

no code implementations 20 Sep 2015 Lei Deng, Siyuan Huang, Yueqi Duan, Baohua Chen, Jie Zhou

Conventional single-image-based localization methods usually fail to localize a query image when there are large variations between the query image and the pre-built scene.

Image-Based Localization
