Search Results for author: Chuanjian Liu

Found 14 papers, 8 papers with code

An Empirical Study of Scaling Law for OCR

1 code implementation • 29 Dec 2023 • Miao Rang, Zhenni Bi, Chuanjian Liu, Yunhe Wang, Kai Han

Scaling laws relating model size, data volume, and computation to model performance have been studied extensively in Natural Language Processing (NLP); a generic form is sketched below.

Ranked #1 on Scene Text Recognition on ICDAR2013 (using extra training data)

Optical Character Recognition • Optical Character Recognition (OCR) +1
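As background for the entry above: in NLP these laws are usually fitted as power laws (Kaplan et al., 2020). The notation below (loss L, parameter count N, data size D, compute C, with fitted constants N_c, D_c, C_c and exponents alpha) is the standard form from that literature, not taken from this paper:

    L(N) = (N_c / N)^{\alpha_N},   L(D) = (D_c / D)^{\alpha_D},   L(C) = (C_c / C)^{\alpha_C}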

Boosting Semantic Segmentation from the Perspective of Explicit Class Embeddings

no code implementations • ICCV 2023 • Yuhe Liu, Chuanjian Liu, Kai Han, Quan Tang, Zengchang Qin

Following this observation, we propose ECENet, a new segmentation paradigm in which class embeddings are obtained and enhanced explicitly through interaction with multi-stage image features (see the sketch below).

Segmentation • Semantic Segmentation
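A minimal PyTorch sketch of the idea above: learnable class embeddings are refined by cross-attention over image features and then score pixels by dot product. The module name, head count, and single-stage design are illustrative assumptions, not ECENet's actual architecture.

import torch
import torch.nn as nn

class ClassEmbeddingHead(nn.Module):
    def __init__(self, num_classes, dim):
        super().__init__()
        self.class_embed = nn.Parameter(torch.randn(num_classes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, feats):                               # feats: (B, C, H, W)
        B, C, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)            # (B, HW, C)
        q = self.class_embed.unsqueeze(0).expand(B, -1, -1)  # (B, K, C)
        q, _ = self.attn(q, tokens, tokens)                  # refine class embeddings
        return torch.einsum('bkc,bchw->bkhw', q, feats)      # per-pixel class scores

head = ClassEmbeddingHead(num_classes=19, dim=256)
out = head(torch.randn(2, 256, 64, 64))                      # (2, 19, 64, 64)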

Category Feature Transformer for Semantic Segmentation

1 code implementation • 10 Aug 2023 • Quan Tang, Chuanjian Liu, Fagui Liu, Yifan Liu, Jun Jiang, BoWen Zhang, Kai Han, Yunhe Wang

Aggregation of multi-stage features has been shown to play a significant role in semantic segmentation (a generic fusion sketch follows).

Segmentation • Semantic Segmentation
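A minimal, generic sketch of multi-stage feature aggregation: project each stage to a common width, upsample, and sum. This illustrates the general mechanism only, not the paper's Category Feature Transformer; all names and sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageFusion(nn.Module):
    def __init__(self, in_dims, dim):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, dim, 1) for c in in_dims)

    def forward(self, stages):   # list of (B, C_i, H_i, W_i), highest resolution first
        target = stages[0].shape[-2:]
        return sum(
            F.interpolate(p(s), size=target, mode='bilinear', align_corners=False)
            for p, s in zip(self.proj, stages)
        )

fuse = MultiStageFusion([64, 128, 320, 512], dim=256)
feats = [torch.randn(1, c, 128 // 2**i, 128 // 2**i)
         for i, c in enumerate([64, 128, 320, 512])]
out = fuse(feats)                # (1, 256, 128, 128)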

Bi-ViT: Pushing the Limit of Vision Transformer Quantization

no code implementations • 21 May 2023 • Yanjing Li, Sheng Xu, Mingbao Lin, Xianbin Cao, Chuanjian Liu, Xiao Sun, Baochang Zhang

Quantization of vision transformers (ViTs) offers a promising route to deploying large pre-trained networks on resource-limited devices (see the binarization sketch below).

Binarization • Quantization
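For background, a minimal sketch of weight binarization with a straight-through estimator (STE), the standard building block behind binary networks; this is the generic technique, not Bi-ViT's specific method.

import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        scale = w.abs().mean()                   # per-tensor scaling factor
        return scale * torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass gradients through, zeroed where |w| > 1
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(8, 8, requires_grad=True)
w_bin = BinarizeSTE.apply(w)
w_bin.sum().backward()                           # gradients flow via the STE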

Redistribution of Weights and Activations for AdderNet Quantization

no code implementations • 20 Dec 2022 • Ying Nie, Kai Han, Haikang Diao, Chuanjian Liu, Enhua Wu, Yunhe Wang

To this end, we first thoroughly analyze the differences in the distributions of weights and activations in AdderNet, and then propose a new quantization algorithm that redistributes the weights and activations (a simplified sketch follows).

Quantization
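A hedged simplification of the redistribution idea above: rescale weights and activations onto a shared clipping range so one uniform step size serves both, which matters because adder layers combine weights and activations additively. The function below is illustrative, not the paper's algorithm.

import torch

def quantize_shared(x, w, bits=8):
    # Redistribute: map both tensors onto a common clipping range [-r, r].
    r = torch.max(x.abs().max(), w.abs().max())
    step = 2 * r / (2**bits - 1)                 # shared uniform step size
    q = lambda t: torch.clamp(torch.round(t / step),
                              -(2**(bits - 1)), 2**(bits - 1) - 1)
    return q(x) * step, q(w) * step              # de-quantized tensors

x, w = torch.randn(1, 64, 8, 8), torch.randn(64, 64, 3, 3)
xq, wq = quantize_shared(x, w)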

Network Amplification With Efficient MACs Allocation

2 code implementations • Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022 • Chuanjian Liu, Kai Han, An Xiao, Ying Nie, Wei Zhang, Yunhe Wang

In particular, when the proposed method is used to enlarge models derived from GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies under budgets of 600M and 4.4B MACs, respectively.

Greedy Network Enlarging

1 code implementation • 31 Jul 2021 • Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang

Recent studies on deep convolutional neural networks, such as EfficientNet and RegNet, present a simple paradigm of architecture design: models with more MACs typically achieve better accuracy (a greedy-search sketch follows).
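A minimal sketch of a greedy enlarging loop under a MACs budget. Here estimate_macs and evaluate are hypothetical stand-ins for a profiler and a proxy accuracy measure; the paper's actual search procedure differs in detail.

def greedy_enlarge(config, budget, estimate_macs, evaluate, step=1.1):
    # Repeatedly try growing one dimension and keep the best change
    # that still fits the MACs budget.
    dims = ('depth', 'width', 'resolution')
    while True:
        candidates = []
        for d in dims:
            trial = dict(config, **{d: config[d] * step})
            if estimate_macs(trial) <= budget:
                candidates.append((evaluate(trial), trial))
        if not candidates:
            return config                  # no dimension fits the budget
        _, config = max(candidates, key=lambda c: c[0])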

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution

4 code implementations • 21 Jan 2021 • Ying Nie, Kai Han, Zhenhua Liu, Chuanjian Liu, Yunhe Wang

Based on the observation that many features in SISR models are also similar to each other, we propose using a shift operation to generate the redundant features (i.e., ghost features); a sketch follows.

Image Super-Resolution
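A minimal PyTorch sketch of shift-generated ghost features: half of the output channels come from a convolution, the other half from a cheap spatial shift of those intrinsic features. The fixed shift offsets here are an assumption; GhostSR learns them.

import torch
import torch.nn as nn

class GhostShift(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        self.intrinsic_ch = out_ch // ratio
        self.conv = nn.Conv2d(in_ch, self.intrinsic_ch, 3, padding=1)

    def forward(self, x):
        intrinsic = self.conv(x)                                   # expensive features
        ghost = torch.roll(intrinsic, shifts=(1, 1), dims=(2, 3))  # cheap shift
        return torch.cat([intrinsic, ghost], dim=1)

m = GhostShift(64, 64)
y = m(torch.randn(1, 64, 32, 32))   # (1, 64, 32, 32)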

Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts

1 code implementation • NeurIPS 2020 • Guilin Li, Junlei Zhang, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin, Wei Zhang, Jiashi Feng, Tong Zhang

In particular, we propose a novel joint-training framework that trains a plain CNN by leveraging the gradients of its ResNet counterpart (a coarse sketch follows).
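A coarse, hedged stand-in for the joint-training idea above: each plain student block is paired with a residual teacher block and penalized on the stage-wise feature gap. The paper's gradient-sharing framework is richer than this sketch; all names and sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_body(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(ch, ch, 3, padding=1))

student = nn.ModuleList(make_body(32) for _ in range(3))   # plain, no shortcuts
teacher = nn.ModuleList(make_body(32) for _ in range(3))   # used with shortcuts

x = torch.randn(4, 32, 16, 16)
s, t, loss = x, x, 0.0
for sb, tb in zip(student, teacher):
    s = sb(s)                      # plain forward
    t = tb(t) + t                  # residual forward
    loss = loss + F.mse_loss(s, t.detach())
loss.backward()                    # student blocks learn to mimic residual features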

Widening and Squeezing: Towards Accurate and Efficient QNNs

no code implementations • 3 Feb 2020 • Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu

Quantized neural networks (QNNs) are attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of full-precision networks (a widen-and-squeeze sketch follows).

Quantization
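A hedged sketch of the widen-then-squeeze idea suggested by the title: quantize a widened layer so the extra low-precision channels recover capacity, then squeeze back to the original width with a 1x1 projection. Sizes and the fake-quantizer are illustrative assumptions only.

import torch
import torch.nn as nn

def fake_quant(t, bits=4):
    # uniform symmetric fake-quantization, for illustration only
    scale = t.abs().max() / (2**(bits - 1) - 1)
    return torch.round(t / scale).clamp(-(2**(bits - 1)), 2**(bits - 1) - 1) * scale

class WideQuantBlock(nn.Module):
    def __init__(self, ch, widen=4):
        super().__init__()
        self.wide = nn.Conv2d(ch, ch * widen, 3, padding=1)
        self.squeeze = nn.Conv2d(ch * widen, ch, 1)    # back to the original width

    def forward(self, x):
        w = fake_quant(self.wide.weight)               # low-precision, wide weights
        y = nn.functional.conv2d(x, w, self.wide.bias, padding=1)
        return self.squeeze(torch.relu(y))

blk = WideQuantBlock(16)
out = blk(torch.randn(1, 16, 8, 8))                    # (1, 16, 8, 8)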

Learning Instance-wise Sparsity for Accelerating Deep Models

no code implementations • 27 Jul 2019 • Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu

Designing deep convolutional neural networks with high efficiency and low memory usage is essential for a wide variety of machine learning tasks.

Attribute Aware Pooling for Pedestrian Attribute Recognition

no code implementations • 27 Jul 2019 • Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu

This paper extends the strengths of deep convolutional neural networks (CNNs) to pedestrian attribute recognition by devising a novel attribute-aware pooling algorithm.

Attribute • Pedestrian Attribute Recognition

Data-Free Learning of Student Networks

3 code implementations • ICCV 2019 • Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian

Learning portable neural networks is essential in computer vision so that heavy pre-trained models can be deployed on edge devices such as mobile phones and micro sensors.

Neural Network Compression
