no code implementations • 15 Dec 2022 • Junbo Qiao, Shaohui Lin, Yunlun Zhang, Wei Li, Jie Hu, Gaoqi He, Changbo Wang, Lizhuang Ma
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.
2 code implementations • 22 Jun 2022 • Peixian Chen, Kekai Sheng, Mengdan Zhang, Mingbao Lin, Yunhang Shen, Shaohui Lin, Bo Ren, Ke Li
Open-vocabulary object detection (OVD) aims to scale up vocabulary size to detect objects of novel categories beyond the training vocabulary.
Ranked #2 on Open Vocabulary Object Detection on LVIS v1.0
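A common open-vocabulary recipe (a generic sketch, not necessarily this paper's method) scores detector region features against text embeddings of class-name prompts, so adding novel categories only requires adding new prompts:

```python
# Minimal sketch of the open-vocabulary classification step: region features
# are scored against text embeddings of class names, so novel categories are
# covered by extending the prompt list. All names here are illustrative.
import torch
import torch.nn.functional as F

def open_vocab_logits(region_feats, text_embeds, temperature=0.01):
    """Cosine-similarity logits between region features and class embeddings.

    region_feats: (num_regions, dim)  visual features from the detector head
    text_embeds:  (num_classes, dim)  embeddings of class-name prompts
                  (in practice produced by a frozen text encoder such as CLIP)
    """
    region_feats = F.normalize(region_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    return region_feats @ text_embeds.t() / temperature

# Toy usage with random stand-ins for the real encoders.
regions = torch.randn(5, 512)    # 5 region proposals
classes = torch.randn(20, 512)   # 20 class-name prompts, incl. novel ones
probs = open_vocab_logits(regions, classes).softmax(dim=-1)
```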
1 code implementation • 2 Jun 2022 • Nan Wang, Shaohui Lin, Xiaoxiao Li, Ke Li, Yunhang Shen, Yue Gao, Lizhuang Ma
U-Nets have achieved tremendous success in medical image segmentation.
no code implementations • 28 Apr 2022 • Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao
Multi-exit architectures consist of a backbone and branch classifiers that offer shortened inference pathways to reduce the run-time of deep neural networks.
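A minimal sketch of the general multi-exit idea, with illustrative stage and threshold choices that are not the paper's:

```python
# Hedged sketch of a generic multi-exit network: branch classifiers attached
# to intermediate stages let confident inputs exit early and skip the
# remaining stages, shortening the inference pathway.
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    def __init__(self, dim=64, num_classes=10, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_stages)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_stages)]
        )

    @torch.no_grad()
    def forward(self, x, threshold=0.9):
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            probs = exit_head(x).softmax(dim=-1)
            # Exit as soon as this branch classifier is confident enough.
            if probs.max() >= threshold:
                return probs
        return probs  # fall through to the final (deepest) exit

net = MultiExitNet()
out = net(torch.randn(1, 64))  # single input, so the max-confidence check is per sample
```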
no code implementations • CVPR 2022 • Mengtian Li, Yuan Xie, Yunhang Shen, Bo Ke, Ruizhi Qiao, Bo Ren, Shaohui Lin, Lizhuang Ma
To address the huge labeling cost in large-scale point cloud semantic segmentation, we propose a novel hybrid contrastive regularization (HybridCR) framework in a weakly-supervised setting, which achieves competitive performance compared to its fully-supervised counterpart.
no code implementations • 29 Sep 2021 • Haiyan Wu, Yuting Gao, Ke Li, Yinqi Zhang, Shaohui Lin, Yuan Xie, Xing Sun
These findings motivate us to introduce a self-supervised teaching assistant (SSTA) alongside the commonly used supervised teacher to improve the performance of transformers.
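A hedged sketch of distilling from two teachers at once; the loss form and the weighting are assumptions, not the paper's exact objective:

```python
# Illustrative dual-teacher distillation: the student matches soft targets
# from both a supervised teacher and a self-supervised "teaching assistant",
# balanced by `alpha`. Temperature-scaled KL is a standard KD choice.
import torch
import torch.nn.functional as F

def dual_teacher_kd_loss(student_logits, teacher_logits, ssta_logits,
                         T=4.0, alpha=0.5):
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)   # supervised teacher
    p_a = F.softmax(ssta_logits / T, dim=-1)      # self-supervised assistant
    kd_t = F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
    kd_a = F.kl_div(log_p_s, p_a, reduction="batchmean") * T * T
    return alpha * kd_t + (1 - alpha) * kd_a

loss = dual_teacher_kd_loss(torch.randn(8, 100), torch.randn(8, 100),
                            torch.randn(8, 100))
```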
no code implementations • 18 Jun 2021 • Chengwei Chen, Yuan Xie, Shaohui Lin, Ruizhi Qiao, Jian Zhou, Xin Tan, Yi Zhang, Lizhuang Ma
Moreover, our model trains more stably in a non-adversarial manner than other adversarial-based novelty detection methods.
6 code implementations • 25 May 2021 • Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao
Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices.
1 code implementation • CVPR 2021 • Yuchao Li, Shaohui Lin, Jianzhuang Liu, Qixiang Ye, Mengdi Wang, Fei Chao, Fan Yang, Jincheng Ma, Qi Tian, Rongrong Ji
Channel pruning and tensor decomposition have received extensive attention in convolutional neural network compression.
1 code implementation • 19 Apr 2021 • Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen
Specifically, we find that the final embedding obtained by mainstream SSL methods contains the most fruitful information, and propose to distill this final embedding to maximally transmit the teacher's knowledge to a lightweight model, by constraining the last embedding of the student to be consistent with that of the teacher.
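As a hedged illustration of this embedding-consistency idea (the exact distance is an assumption, not necessarily the paper's choice):

```python
# Minimal sketch of final-embedding distillation: the student's last
# embedding is pushed toward the frozen teacher's after L2 normalization,
# here via a cosine-similarity loss.
import torch
import torch.nn.functional as F

def embedding_distill_loss(student_emb, teacher_emb):
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb.detach(), dim=-1)  # teacher is frozen
    return (1 - (s * t).sum(dim=-1)).mean()        # 1 - cosine similarity

loss = embedding_distill_loss(torch.randn(32, 128), torch.randn(32, 128))
```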
3 code implementations • CVPR 2021 • Haiyan Wu, Yanyun Qu, Shaohui Lin, Jian Zhou, Ruizhi Qiao, Zhizhong Zhang, Yuan Xie, Lizhuang Ma
In this paper, we propose a novel contrastive regularization (CR), built upon contrastive learning, to exploit the information of both hazy and clear images as negative and positive samples, respectively.
Ranked #5 on Image Dehazing on RS-Haze
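A hedged sketch of such a contrastive regularizer; the L1 distances and the identity feature extractor below are stand-ins for the fixed feature network a real implementation would use:

```python
# Illustrative contrastive regularization for dehazing: the restored image
# (anchor) is pulled toward the clear image (positive) and pushed away from
# its hazy input (negative) in a frozen feature space.
import torch

def contrastive_reg(feat_fn, restored, clear, hazy, eps=1e-7):
    a, p, n = feat_fn(restored), feat_fn(clear), feat_fn(hazy)
    d_pos = (a - p).abs().mean()   # L1 distance to the positive
    d_neg = (a - n).abs().mean()   # L1 distance to the negative
    return d_pos / (d_neg + eps)   # small when close to clear, far from hazy

feat = lambda x: x                 # identity stand-in for a frozen feature net
loss = contrastive_reg(feat, torch.rand(1, 3, 64, 64),
                       torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```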
2 code implementations • CVPR 2021 • Xudong Tian, Zhizhong Zhang, Shaohui Lin, Yanyun Qu, Yuan Xie, Lizhuang Ma
The Information Bottleneck (IB) provides an information-theoretic principle for representation learning, by retaining all information relevant for predicting the label while minimizing redundancy.
Cross-Modality Person Re-Identification (+3 more tasks)
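As a one-line refresher, the classical IB Lagrangian (textbook form; the paper's specific variant may differ) trades off compressing the representation against keeping it predictive of the label:

```latex
% Classical IB objective: compress what Z keeps about the input X while
% retaining what is relevant for predicting the label Y.
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y), \qquad \beta > 0
```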
no code implementations • Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021 • Xuncheng Liu, Xudong Tian, Shaohui Lin, Yanyun Qu, Lizhuang Ma, Wang Yuan, Zhizhong Zhang, Yuan Xie
In this paper, we present a novel purified memory mechanism that simulates the recognition process of human beings.
1 code implementation • ECCV 2020 • Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji
Specifically, most state-of-the-art SR models without batch normalization have a large dynamic quantization range, which is another cause of the performance drop under quantization.
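To see why a large dynamic range hurts uniform quantization, consider a generic linear quantizer (illustrative, not the paper's method): the step size scales with the clipping range, so rare large activations coarsen every value:

```python
# Toy demonstration that range outliers inflate uniform-quantization error.
import torch

def quantize(x, num_bits=4, clip=None):
    clip = float(x.abs().max()) if clip is None else float(clip)
    step = clip / (2 ** (num_bits - 1) - 1)   # step size grows with the range
    return (x.clamp(-clip, clip) / step).round() * step

x = torch.randn(10000)
x[0] = 50.0                                           # one outlier inflates the range
err_full = (quantize(x) - x).pow(2).mean()            # range set by the outlier
err_clip = (quantize(x, clip=3.0) - x).pow(2).mean()  # a tuned clip is far smaller
```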
1 code implementation • 20 Apr 2020 • Moritz Wolter, Shaohui Lin, Angela Yao
Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs).
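The claim is easy to verify with standard LSTM algebra (generic, not paper-specific): the four gates are linear maps, so the weights scale as 4·h·(h + d):

```python
# Counting LSTM parameters: almost everything lives in the gate weight
# matrices, i.e. the linear layers.
import torch.nn as nn

d, h = 512, 1024
lstm = nn.LSTM(input_size=d, hidden_size=h)
n_params = sum(p.numel() for p in lstm.parameters())
# 4*h*(d + h) weight entries dominate; biases add only 8*h more.
print(n_params, 4 * h * (d + h))   # 6299648 vs 6291456
```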
1 code implementation • 28 Sep 2019 • Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo
The large memory and computation consumption in convolutional neural networks (CNNs) has been one of the main barriers for deploying them on resource-limited systems.
no code implementations • ECCV 2020 • Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao
More specifically, we introduce a novel architecture controlling module in each layer to encode the network architecture as a vector.
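A hedged sketch of one way such a per-layer vector could gate channels; the paper's module likely differs in detail:

```python
# Illustrative "architecture vector": a learnable gate scales each output
# channel, and thresholding it encodes which channels the compressed
# architecture keeps.
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.arch_vec = nn.Parameter(torch.ones(out_ch))  # one gate per channel

    def forward(self, x):
        return self.conv(x) * self.arch_vec.view(1, -1, 1, 1)

    def kept_channels(self, tau=0.1):
        # Channels whose gate survives the threshold define the architecture.
        return (self.arch_vec.abs() > tau).nonzero().flatten()

layer = GatedConv(16, 32)
y = layer(torch.randn(1, 16, 8, 8))
idx = layer.kept_channels()
```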
1 code implementation • CVPR 2019 • Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, David Doermann
In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner.
1 code implementation • 23 Jan 2019 • Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li
In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speed up computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries.
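A sketch of a group-Lasso-style structured-sparsity penalty in the spirit of SSR (the exact regularizer form is an assumption): driving a whole filter's group norm to zero marks that output channel for pruning:

```python
# Group-Lasso penalty over conv filters: one group per output filter, so the
# regularizer zeroes out entire filters rather than scattered weights.
import torch

def group_lasso(weight):
    # weight: (out_ch, in_ch, k, k) -> one L2 norm per output filter
    return weight.flatten(1).norm(dim=1).sum()

w = torch.randn(64, 32, 3, 3, requires_grad=True)
task_loss = w.pow(2).mean()               # stand-in for the real task loss
total = task_loss + 1e-3 * group_lasso(w) # joint objective, end-to-end trainable
total.backward()
```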
1 code implementation • CVPR 2019 • Yuchao Li, Shaohui Lin, Baochang Zhang, Jianzhuang Liu, David Doermann, Yongjian Wu, Feiyue Huang, Rongrong Ji
The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature-map importance in a feature-agnostic manner and guide model compression.
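A simplified, hedged illustration of a kernel-sparsity-and-entropy style score (the paper's exact formulas differ): per input channel, combine the L1 magnitude of its 2D kernels with the entropy of their norm distribution:

```python
# Illustrative channel-importance score from kernel statistics only, i.e.
# feature-agnostic: no feature maps or input data are needed.
import torch

def kse_indicator(weight, eps=1e-12):
    # weight: (out_ch, in_ch, k, k); score one input channel (feature map) each
    norms = weight.abs().sum(dim=(2, 3))          # (out_ch, in_ch) kernel L1 norms
    sparsity = norms.sum(dim=0)                   # total kernel magnitude per channel
    p = norms / (norms.sum(dim=0, keepdim=True) + eps)
    entropy = -(p * (p + eps).log()).sum(dim=0)   # diversity of kernels per channel
    return sparsity / (1.0 + entropy)             # illustrative combination

scores = kse_indicator(torch.randn(64, 32, 3, 3))  # (32,) channel importance
```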