Search Results for author: Shaohui Lin

Found 20 papers, 13 papers with code

DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution

no code implementations 15 Dec 2022 Junbo Qiao, Shaohui Lin, Yunlun Zhang, Wei Li, Jie Hu, Gaoqi He, Changbo Wang, Lizhuang Ma

Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.

Image Super-Resolution, SSIM

A Closer Look at Branch Classifiers of Multi-exit Architectures

no code implementations 28 Apr 2022 Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao

Multi-exit architectures consist of a backbone and branch classifiers that offer shortened inference pathways to reduce the run-time of deep neural networks.
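
As a rough sketch of the idea (module names and sizes are illustrative, not the paper's architecture), a multi-exit network attaches a lightweight branch classifier to an intermediate backbone stage, and inference can stop early when that branch is confident:

```python
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Toy multi-exit network: two backbone stages, each followed by a
    lightweight branch classifier that offers a shortened inference path."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit1 = nn.Linear(32 * 8 * 8, num_classes)  # early branch classifier
        self.exit2 = nn.Linear(64 * 4 * 4, num_classes)  # final classifier
        self.threshold = threshold

    def forward(self, x):
        h1 = self.stage1(x)
        logits1 = self.exit1(h1.flatten(1))
        # Early exit at inference when the branch is confident
        # (shown for a single example; batching needs per-sample routing).
        if not self.training and logits1.softmax(-1).max() > self.threshold:
            return logits1
        return self.exit2(self.stage2(h1).flatten(1))
```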

HybridCR: Weakly-Supervised 3D Point Cloud Semantic Segmentation via Hybrid Contrastive Regularization

no code implementations CVPR 2022 Mengtian Li, Yuan Xie, Yunhang Shen, Bo Ke, Ruizhi Qiao, Bo Ren, Shaohui Lin, Lizhuang Ma

To address the huge labeling cost in large-scale point cloud semantic segmentation, we propose a novel hybrid contrastive regularization (HybridCR) framework in a weakly-supervised setting, which obtains competitive performance compared to its fully-supervised counterpart.

Semantic Segmentation, Semantic Similarity +1

Self-supervised Models are Good Teaching Assistants for Vision Transformers

no code implementations 29 Sep 2021 Haiyan Wu, Yuting Gao, Ke Li, Yinqi Zhang, Shaohui Lin, Yuan Xie, Xing Sun

These findings motivate us to introduce a self-supervised teaching assistant (SSTA) besides the commonly used supervised teacher to improve the performance of vision transformers.

Knowledge Distillation
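
A minimal sketch of combining the two teachers, assuming a logit-distillation term for the supervised teacher and a feature-consistency term for the self-supervised teaching assistant (weights, temperature, and loss choices are placeholders, not the paper's settings):

```python
import torch.nn.functional as F

def ssta_distill_loss(student_logits, teacher_logits, student_feat, ssta_feat,
                      labels, alpha=0.5, beta=0.5, tau=4.0):
    """Combine supervision, logit KD from the supervised teacher, and a
    feature-consistency term from the self-supervised teaching assistant."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    consist = F.mse_loss(student_feat, ssta_feat.detach())
    return ce + alpha * kd + beta * consist
```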

Novelty Detection via Contrastive Learning with Negative Data Augmentation

no code implementations 18 Jun 2021 Chengwei Chen, Yuan Xie, Shaohui Lin, Ruizhi Qiao, Jian Zhou, Xin Tan, Yi Zhang, Lizhuang Ma

Moreover, our model is trained in a non-adversarial manner and is thus more stable than other adversarial novelty detection methods.

Contrastive Learning, Data Augmentation +2

Towards Compact Single Image Super-Resolution via Contrastive Self-distillation

6 code implementations 25 May 2021 Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao

Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices.

Image Super-Resolution, Single Image Super Resolution +2

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning

1 code implementation 19 Apr 2021 Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen

Specifically, we find that the final embedding obtained by mainstream SSL methods contains the most fruitful information, and propose to distill this final embedding to maximally transmit the teacher's knowledge to a lightweight model, by constraining the last embedding of the student to be consistent with that of the teacher.

Contrastive Learning, Representation Learning +1
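
A minimal sketch of such embedding distillation, assuming an MSE consistency loss and a linear projection to bridge the dimension gap between student and teacher (both are illustrative choices):

```python
import torch.nn as nn
import torch.nn.functional as F

class FinalEmbeddingDistiller(nn.Module):
    """Constrain the student's last embedding to match the frozen teacher's.
    The linear projection bridging the dimension gap is an assumption."""
    def __init__(self, student_dim=512, teacher_dim=2048):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_emb, teacher_emb):
        # Teacher is frozen: gradients flow only into the student (and proj).
        return F.mse_loss(self.proj(student_emb), teacher_emb.detach())
```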

Contrastive Learning for Compact Single Image Dehazing

3 code implementations CVPR 2021 Haiyan Wu, Yanyun Qu, Shaohui Lin, Jian Zhou, Ruizhi Qiao, Zhizhong Zhang, Yuan Xie, Lizhuang Ma

In this paper, we propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of both hazy images and clear images, as negative and positive samples respectively.

Contrastive Learning, Image Dehazing +1
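
A simplified single-layer sketch of such a contrastive regularizer, assuming features from a fixed pre-trained network and an L1 distance (the paper combines multiple feature layers with weights):

```python
import torch.nn.functional as F

def contrastive_regularization(feat_restored, feat_clear, feat_hazy, eps=1e-7):
    """Pull the restored image toward the clear image (positive) and push it
    away from the hazy input (negative) in a fixed feature space."""
    d_pos = F.l1_loss(feat_restored, feat_clear.detach())   # close to positive
    d_neg = F.l1_loss(feat_restored, feat_hazy.detach())    # far from negative
    return d_pos / (d_neg + eps)
```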

Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification

2 code implementations CVPR 2021 Xudong Tian, Zhizhong Zhang, Shaohui Lin, Yanyun Qu, Yuan Xie, Lizhuang Ma

The Information Bottleneck (IB) provides an information-theoretic principle for representation learning, by retaining all the information relevant for predicting the label while minimizing the redundancy.

Cross-Modality Person Re-identification, Cross-Modal Person Re-Identification +3
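
For reference, the standard IB objective reads as follows (the paper argues against estimating these mutual-information terms directly; this is the textbook formulation, not the paper's loss):

```latex
% Standard IB Lagrangian: learn a representation Z of input X that is
% predictive of label Y while compressing away the redundancy in X;
% beta trades the two terms off.
\mathcal{L}_{\mathrm{IB}} \;=\; \beta\, I(X; Z) \;-\; I(Z; Y)
```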

PAMS: Quantized Super-Resolution via Parameterized Max Scale

1 code implementation ECCV 2020 Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji

Specifically, most state-of-the-art SR models without batch normalization have a large dynamic quantization range, which serves as another cause of performance drop.

Quantization, Super-Resolution +1
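
A sketch of quantization with a trainable max scale in this spirit (initialization, bit-width handling, and gradient details differ from the paper; the straight-through estimator here is a common stand-in):

```python
import torch
import torch.nn as nn

class LearnableMaxScaleQuant(nn.Module):
    """k-bit symmetric quantizer whose clipping scale alpha is a trainable
    parameter, so the dynamic range is learned rather than fixed."""
    def __init__(self, bits=8, init_scale=6.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(init_scale))
        self.levels = 2 ** (bits - 1) - 1

    def forward(self, x):
        # Clip to the learned dynamic range [-alpha, alpha].
        x = torch.min(torch.max(x, -self.alpha), self.alpha)
        step = self.alpha / self.levels
        q = torch.round(x / step) * step
        # Straight-through: quantized forward pass, identity backward pass.
        return x + (q - x).detach()
```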

Neural network compression via learnable wavelet transforms

1 code implementation 20 Apr 2020 Moritz Wolter, Shaohui Lin, Angela Yao

Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs).

Data Compression, Neural Network Compression

Training convolutional neural networks with cheap convolutions and online distillation

1 code implementation 28 Sep 2019 Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo

The large memory and computation consumption in convolutional neural networks (CNNs) has been one of the main barriers for deploying them on resource-limited systems.

Knowledge Distillation

Interpretable Neural Network Decoupling

no code implementations ECCV 2020 Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao

More specifically, we introduce a novel architecture controlling module in each layer that encodes the network architecture as a vector.

Network Interpretation
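
A loose, illustrative reading of such a controlling module (the module name and design below are assumptions, not the paper's): a per-layer gate predicts a vector over output channels from the layer input, so the sub-architecture each input activates can be read off as that vector:

```python
import torch.nn as nn

class ArchControlGate(nn.Module):
    """Hypothetical per-layer controlling module: predict a gating vector
    over output channels from the layer input."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, out_ch), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x)                             # architecture vector per input
        return self.conv(x) * g[:, :, None, None], g
```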

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning

1 code implementation CVPR 2019 Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, David Doermann

In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner.

Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning

1 code implementation 23 Jan 2019 Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li

In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speed up computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries.

Domain Adaptation, object-detection +1
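
A simplified structured-sparsity penalty in this spirit, using a group lasso with one l2 group per output filter (the exact regularizer and optimization in the paper differ):

```python
import torch

def structured_sparsity_penalty(model, lam=1e-4):
    """Group lasso over conv filters: one l2 group per output filter, so
    whole filters are driven to zero and can be pruned structurally."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            # weight: (out_ch, in_ch, kH, kW) -> one group per output filter
            penalty = penalty + m.weight.flatten(1).norm(dim=1).sum()
    return lam * penalty

# Usage: loss = task_loss + structured_sparsity_penalty(model)
```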

Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression

1 code implementation CVPR 2019 Yuchao Li, Shaohui Lin, Baochang Zhang, Jianzhuang Liu, David Doermann, Yongjian Wu, Feiyue Huang, Rongrong Ji

The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature-map importance in a feature-agnostic manner to guide model compression.

Model Compression
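
A loose sketch of a sparsity-plus-entropy indicator of this kind (the exact entropy estimator and the way the two terms are combined below are assumptions, not the paper's formulas):

```python
import torch

def kse_indicator(weight, alpha=1.0, k=5):
    """Score each input feature map of a conv layer with weight of shape
    (out_ch, in_ch, kH, kW): l1 sparsity of its 2D kernels, discounted by a
    nearest-neighbour entropy that measures kernel diversity."""
    out_ch, in_ch = weight.shape[:2]
    scores = []
    for c in range(in_ch):
        kernels = weight[:, c].reshape(out_ch, -1)   # the channel's 2D kernels
        sparsity = kernels.abs().sum()
        dists = torch.cdist(kernels, kernels)        # pairwise kernel distances
        knn = dists.topk(min(k + 1, out_ch), largest=False).values[:, 1:]
        density = 1.0 / (knn.mean(dim=1) + 1e-8)
        p = density / density.sum()
        entropy = -(p * (p + 1e-8).log()).sum()
        scores.append(torch.sqrt(sparsity / (1.0 + alpha * entropy)))
    return torch.stack(scores)  # higher score = more important feature map
```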
