Search Results for author: Chuanjian Liu

Found 6 papers, 1 paper with code

Greedy Network Enlarging

no code implementations 31 Jul 2021 Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang

Recent studies on deep convolutional neural networks suggest a simple architecture-design paradigm: models with more MACs typically achieve better accuracy, as with EfficientNet and RegNet.
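The abstract only states the MACs-accuracy observation; the greedy enlarging procedure itself is not described here. As a rough illustration of a greedy budget-constrained search, the toy sketch below repeatedly widens whichever stage most improves a score while total MACs stay within budget. The helpers `macs_of` and `score_of` are hypothetical stand-ins, not the paper's actual cost or accuracy estimators.

```python
def greedy_enlarge(widths, macs_of, score_of, budget):
    """Toy greedy enlarging loop (illustrative only, not the paper's method).

    widths:   list of per-stage channel widths
    macs_of:  hypothetical callable estimating total MACs of a configuration
    score_of: hypothetical callable estimating quality of a configuration
    budget:   maximum allowed MACs
    """
    widths = list(widths)
    improved = True
    while improved:
        improved = False
        best = None
        # Try enlarging each stage by one unit; keep the best in-budget candidate.
        for i in range(len(widths)):
            cand = widths.copy()
            cand[i] += 1
            if macs_of(cand) <= budget and (best is None or score_of(cand) > score_of(best)):
                best = cand
        # Accept the candidate only if it strictly improves the score.
        if best is not None and score_of(best) > score_of(widths):
            widths = best
            improved = True
    return widths
```

With a concave toy score, the loop stops once extra width no longer pays off, even if MACs budget remains.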

Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts

no code implementations NeurIPS 2020 Guilin Li, Junlei Zhang, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin, Wei Zhang, Jiashi Feng, Tong Zhang

In particular, we propose a novel joint-training framework that trains a plain CNN by leveraging the gradients of its ResNet counterpart.

Widening and Squeezing: Towards Accurate and Efficient QNNs

no code implementations3 Feb 2020 Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu

Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.

Quantization
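The "cheap calculation and storage" claim comes from replacing float weights with low-bit integers. As a minimal sketch of that idea (a generic symmetric uniform quantizer, not the widening-and-squeezing scheme the paper proposes), weights can be mapped to signed `num_bits` integers plus one float scale:

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization sketch (generic, not the paper's method).

    Maps a float array to signed num_bits integers plus a single float scale,
    so storage drops from 32 bits to num_bits per weight.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-12)  # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale  # dequantize with q * scale
```

The round-trip error of this scheme is bounded by half a quantization step, which is the accuracy gap QNN research then tries to close.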

Attribute Aware Pooling for Pedestrian Attribute Recognition

no code implementations 27 Jul 2019 Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu

This paper expands the strength of deep convolutional neural networks (CNNs) to the pedestrian attribute recognition problem by devising a novel attribute aware pooling algorithm.

Pedestrian Attribute Recognition

Learning Instance-wise Sparsity for Accelerating Deep Models

no code implementations 27 Jul 2019 Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu

Exploring deep convolutional neural networks with high efficiency and low memory usage is essential for a wide variety of machine learning tasks.

Data-Free Learning of Student Networks

3 code implementations ICCV 2019 Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian

Learning portable neural networks is essential in computer vision, so that heavy pre-trained deep models can be deployed on edge devices such as mobile phones and micro sensors.

Neural Network Compression
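Learning a portable student from a heavy teacher is usually framed as knowledge distillation. The sketch below shows only the standard Hinton-style soft-label loss on temperature-softened logits; the data-free setting of this paper additionally trains a generator to synthesize the inputs, which is not shown here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style distillation loss sketch (generic, not the DAFL objective).

    KL(teacher || student) on temperature-softened distributions, scaled by
    T^2 so gradient magnitudes stay comparable across temperatures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)
```

The loss is zero when the student exactly matches the teacher's logits and positive otherwise, which is what drives the student toward the teacher's behavior.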
