Search Results for author: Shaohui Lin

Found 28 papers, 18 papers with code

Aligning and Prompting Everything All at Once for Universal Visual Perception

1 code implementation · 4 Dec 2023 · Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, Rongrong Ji

However, predominant paradigms, driven by casting instance-level tasks as an object-word alignment, introduce heavy cross-modality interaction, which is not effective for prompting object detection and visual grounding.

Object · Object Detection +6

Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification

3 code implementations · CVPR 2021 · Xudong Tian, Zhizhong Zhang, Shaohui Lin, Yanyun Qu, Yuan Xie, Lizhuang Ma

The Information Bottleneck (IB) provides an information-theoretic principle for representation learning: retain all information relevant for predicting the label while minimizing the redundancy.

Cross-Modality Person Re-identification · Cross-Modal Person Re-Identification +3
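
For context, the Information Bottleneck principle referenced in this abstract is usually stated as a Lagrangian trade-off (the standard textbook form, not notation taken from this paper):

```latex
% Information Bottleneck Lagrangian: compress X into a representation Z
% while keeping Z predictive of the label Y; \beta trades the two terms off.
\min_{p(z \mid x)} \; I(Z; X) \;-\; \beta \, I(Z; Y)
```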

Towards Compact Single Image Super-Resolution via Contrastive Self-distillation

8 code implementations · 25 May 2021 · Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao

Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices.

Image Super-Resolution · SSIM +1

Contrastive Learning for Compact Single Image Dehazing

7 code implementations · CVPR 2021 · Haiyan Wu, Yanyun Qu, Shaohui Lin, Jian Zhou, Ruizhi Qiao, Zhizhong Zhang, Yuan Xie, Lizhuang Ma

In this paper, we propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of both hazy images (as negative samples) and clear images (as positive samples).

Contrastive Learning · Image Dehazing +1
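
A minimal sketch of the positive/negative idea described in the abstract above: the restored image is pulled toward the clear image (positive) and pushed away from its hazy input (negative) in a feature space. Function and argument names here are hypothetical, and the paper's actual loss uses multi-layer feature distances with per-layer weights.

```python
import torch.nn.functional as F

def contrastive_regularization(feat_restored, feat_clear, feat_hazy, eps=1e-7):
    # Distance to the clear (positive) image should shrink, while
    # distance to the hazy (negative) input should grow.
    d_pos = F.l1_loss(feat_restored, feat_clear)
    d_neg = F.l1_loss(feat_restored, feat_hazy)
    return d_pos / (d_neg + eps)
```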

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning

2 code implementations · 19 Apr 2021 · Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen

Specifically, we find that the final embedding obtained by mainstream SSL methods contains the most fruitful information, and propose to distill this final embedding, maximally transmitting the teacher's knowledge to a lightweight model by constraining the student's last embedding to be consistent with the teacher's.

Contrastive Learning · Representation Learning +1
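
The "last embedding consistency" constraint described above can be implemented, in its simplest form, as a distance between normalized student and teacher embeddings; this is a generic sketch, not necessarily the paper's exact objective.

```python
import torch.nn.functional as F

def embedding_consistency_loss(student_emb, teacher_emb):
    # The teacher is frozen; the student's final embedding is trained
    # to agree with it (here via MSE on L2-normalized vectors).
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb.detach(), dim=-1)
    return F.mse_loss(s, t)
```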

PAMS: Quantized Super-Resolution via Parameterized Max Scale

1 code implementation · ECCV 2020 · Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji

Specifically, most state-of-the-art SR models without batch normalization have a large dynamic quantization range, which is another cause of the performance drop.

Quantization · Super-Resolution +1
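
To make the "parameterized max scale" concrete, here is a rough sketch of activation quantization where the clipping maximum is a trainable parameter rather than a statistic, with a straight-through estimator for gradients. Class and parameter names are hypothetical and the paper's exact scheme may differ.

```python
import torch
import torch.nn as nn

class LearnableMaxScaleQuant(nn.Module):
    def __init__(self, n_bits=8, init_max=6.0):
        super().__init__()
        # Trainable upper bound of the quantization range.
        self.alpha = nn.Parameter(torch.tensor(init_max))
        self.levels = 2 ** n_bits - 1

    def forward(self, x):
        x = torch.relu(x)                    # assume non-negative activations
        x = torch.minimum(x, self.alpha)     # clip to the learned max scale
        step = self.alpha / self.levels
        q = torch.round(x / step) * step     # uniform quantization
        return x + (q - x).detach()          # straight-through gradient
```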

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning

1 code implementation · CVPR 2019 · Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, David Doermann

In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner.

Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression

1 code implementation · CVPR 2019 · Yuchao Li, Shaohui Lin, Baochang Zhang, Jianzhuang Liu, David Doermann, Yongjian Wu, Feiyue Huang, Rongrong Ji

The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature-map importance in a feature-agnostic manner to guide model compression.

Clustering · Model Compression
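
Schematically, the KSE indicator scores each input channel by combining how sparse its 2D kernels are with how diverse they are. The snippet below is only an illustrative stand-in with hypothetical names; the paper defines a specific sparsity term and a density-based entropy, combined into a single indicator.

```python
import torch

def kse_like_score(conv_weight, eps=1e-12):
    # conv_weight: (out_channels, in_channels, kH, kW)
    norms = conv_weight.abs().sum(dim=(2, 3))           # L1 norm of each 2D kernel
    sparsity = norms.sum(dim=0)                         # kernel sparsity per input channel
    p = norms / (norms.sum(dim=0, keepdim=True) + eps)
    entropy = -(p * (p + eps).log()).sum(dim=0)         # kernel diversity per channel
    return sparsity * entropy                           # higher = more important channel
```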

Neural network compression via learnable wavelet transforms

1 code implementation · 20 Apr 2020 · Moritz Wolter, Shaohui Lin, Angela Yao

Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs).

Data Compression · Neural Network Compression

Training convolutional neural networks with cheap convolutions and online distillation

1 code implementation · 28 Sep 2019 · Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo

The large memory and computation consumption of convolutional neural networks (CNNs) has been one of the main barriers to deploying them on resource-limited systems.

Knowledge Distillation

Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning

1 code implementation · 23 Jan 2019 · Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li

In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speed up computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries.

Domain Adaptation · Object Detection +2
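
For intuition, structured-sparsity regularization is typically built from group penalties over whole filters, as in the generic group-Lasso sketch below; this is not claimed to be the paper's exact regularizer.

```python
import torch

def filter_group_lasso(conv_weight, lam=1e-4):
    # conv_weight: (out_channels, in_channels, kH, kW)
    # Penalizing each filter's L2 norm drives entire filters to zero,
    # so they can be removed without sparse-library support.
    filter_norms = conv_weight.flatten(start_dim=1).norm(p=2, dim=1)
    return lam * filter_norms.sum()
```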

Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler

1 code implementation · 1 Jul 2023 · Shaohui Lin, Wenxuan Huang, Jiao Xie, Baochang Zhang, Yunhang Shen, Zhou Yu, Jungong Han, David Doermann

In this paper, we propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning, which globally prunes redundant filters based on the prior knowledge of a pre-trained model in a differentiable, non-alternating optimization.

Image Classification · Network Pruning
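
A differentiable filter sampler can be pictured as a learnable per-filter gate trained jointly with the network; the sketch below uses a sigmoid gate with a straight-through estimator. This illustrates the general idea only; the paper's sampler and masked-filter-modeling objective are more involved, and all names here are hypothetical.

```python
import torch
import torch.nn as nn

class DifferentiableFilterGate(nn.Module):
    def __init__(self, n_filters):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_filters))  # one gate per filter

    def forward(self, conv_out):
        soft = torch.sigmoid(self.logits)
        hard = (soft > 0.5).float()               # binary keep/prune decision
        gate = hard + soft - soft.detach()        # straight-through gradients
        return conv_out * gate.view(1, -1, 1, 1)  # zero out pruned filters
```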

HybridCR: Weakly-Supervised 3D Point Cloud Semantic Segmentation via Hybrid Contrastive Regularization

1 code implementation · CVPR 2022 · Mengtian Li, Yuan Xie, Yunhang Shen, Bo Ke, Ruizhi Qiao, Bo Ren, Shaohui Lin, Lizhuang Ma

To address the huge labeling cost in large-scale point cloud semantic segmentation, we propose a novel hybrid contrastive regularization (HybridCR) framework in the weakly-supervised setting, which achieves competitive performance compared to its fully-supervised counterpart.

Semantic Segmentation · Semantic Similarity +1

Interpretable Neural Network Decoupling

no code implementations · ECCV 2020 · Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao

More specifically, we introduce a novel architecture controlling module in each layer to encode the network architecture by a vector.

Network Interpretation

Novelty Detection via Contrastive Learning with Negative Data Augmentation

no code implementations · 18 Jun 2021 · Chengwei Chen, Yuan Xie, Shaohui Lin, Ruizhi Qiao, Jian Zhou, Xin Tan, Yi Zhang, Lizhuang Ma

Moreover, being trained in a non-adversarial manner, our model is more stable than other adversarial-based novelty detection methods.

Clustering · Contrastive Learning +4

Self-supervised Models are Good Teaching Assistants for Vision Transformers

no code implementations · 29 Sep 2021 · Haiyan Wu, Yuting Gao, Ke Li, Yinqi Zhang, Shaohui Lin, Yuan Xie, Xing Sun

These findings motivate us to introduce a self-supervised teaching assistant (SSTA) alongside the commonly used supervised teacher to improve the performance of transformers.

Image Classification · Knowledge Distillation
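
Conceptually, the student then distills from two sources at once: soft labels from the supervised teacher and representations from the self-supervised teaching assistant. A hedged sketch follows; the loss weights, temperature, and the exact SSTA objective here are illustrative, not the paper's values.

```python
import torch.nn.functional as F

def ssta_distillation_loss(student_logits, teacher_logits,
                           student_feat, ta_feat, T=4.0, w=0.5):
    # Soft-label KD from the supervised teacher ...
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction='batchmean') * (T * T)
    # ... plus feature alignment to the self-supervised teaching assistant.
    feat = F.mse_loss(student_feat, ta_feat.detach())
    return kd + w * feat
```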

A Closer Look at Branch Classifiers of Multi-exit Architectures

no code implementations · 28 Apr 2022 · Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao

Multi-exit architectures consist of a backbone and branch classifiers that offer shortened inference pathways to reduce the run-time of deep neural networks.
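
To illustrate what "shortened inference pathways" means in practice, here is a generic confidence-thresholded early-exit loop (batch size 1; the threshold, names, and exit rule are illustrative, not this paper's specific branch design):

```python
import torch

@torch.no_grad()
def early_exit_inference(blocks, exit_heads, x, threshold=0.9):
    # blocks: backbone stages; exit_heads: one branch classifier per stage.
    logits = None
    for block, head in zip(blocks, exit_heads):
        x = block(x)
        logits = head(x)
        confidence = torch.softmax(logits, dim=-1).max().item()
        if confidence >= threshold:
            break                      # confident enough: stop early
    return logits
```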

DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution

no code implementations · 15 Dec 2022 · Junbo Qiao, Shaohui Lin, Yunlun Zhang, Wei Li, Jie Hu, Gaoqi He, Changbo Wang, Lizhuang Ma

Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.

Image Super-Resolution · SSIM

AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning

no code implementations · CVPR 2023 · Runqi Wang, Xiaoyue Duan, Guoliang Kang, Jianzhuang Liu, Shaohui Lin, Songcen Xu, Jinhu Lv, Baochang Zhang

The text consists of a category name and a fixed number of learnable parameters, which are selected from our designed attribute word bank and serve as attributes.

Attribute · Continual Learning +1
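
Schematically, each prompt is assembled from learnable attribute embeddings chosen out of a shared bank and concatenated with the category-name embedding. The sketch below is a loose illustration with hypothetical shapes and names, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AttributePromptBank(nn.Module):
    def __init__(self, bank_size=100, dim=512):
        super().__init__()
        # Shared, learnable "attribute word bank".
        self.bank = nn.Parameter(torch.randn(bank_size, dim) * 0.02)

    def forward(self, class_token, attr_ids):
        # class_token: (dim,) embedding of the category name
        # attr_ids: indices of the attributes selected for this input
        attrs = self.bank[attr_ids]                          # (n_attrs, dim)
        return torch.cat([attrs, class_token[None]], dim=0)  # prompt sequence
```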

SPD-DDPM: Denoising Diffusion Probabilistic Models in the Symmetric Positive Definite Space

1 code implementation · 13 Dec 2023 · Yunchen Li, Zhou Yu, Gaoqi He, Yunhang Shen, Ke Li, Xing Sun, Shaohui Lin

On the other hand, the model unconditionally learns the probability distribution of the data $p(X)$ and generates samples that conform to this distribution.

Denoising · Traffic Prediction

Weakly Supervised Open-Vocabulary Object Detection

no code implementations · 19 Dec 2023 · Jianghang Lin, Yunhang Shen, Bingquan Wang, Shaohui Lin, Ke Li, Liujuan Cao

Despite weakly supervised object detection (WSOD) being a promising step toward evading strong instance-level annotations, its capability is confined to closed-set categories within a single training dataset.

Attribute · Novel Concepts +6

Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization

no code implementations · 13 Jan 2024 · Mengtian Li, Shaohui Lin, Zihan Wang, Yunhang Shen, Baochang Zhang, Lizhuang Ma

Thanks to its significant reduction of data-annotation costs, semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.

Pseudo Label · Representation Learning +2

Rethinking Centered Kernel Alignment in Knowledge Distillation

no code implementations · 22 Jan 2024 · Zikai Zhou, Yunhang Shen, Shitong Shao, Linrui Gong, Shaohui Lin

Knowledge distillation has emerged as a highly effective method for bridging the representation discrepancy between large-scale models and lightweight models.

Image Classification · Knowledge Distillation +2
