Search Results for author: Weihao Lin

Found 4 papers, 1 paper with code

Enhanced Sparsification via Stimulative Training

no code implementations • 11 Mar 2024 • Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen

To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation.
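The abstract only sketches the mechanism, so the snippet below is a minimal, hypothetical PyTorch illustration rather than the authors' STP code: a structured sub-network is sampled by masking output channels in the forward pass, the dropped weights keep their stored values instead of being zeroed, and the full network's prediction supervises the sub-network through a self-distillation loss. The toy model, widths, sampling scheme, and loss weighting are all assumptions.

```python
# Illustrative sketch only, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose output channels can be masked without zeroing the stored weights."""
    def forward(self, x, channel_mask=None):
        out = super().forward(x)
        if channel_mask is not None:
            out = out * channel_mask  # dropped channels are silenced, their weights stay intact
        return out

class TinyNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.fc1 = MaskedLinear(32, width)
        self.fc2 = nn.Linear(width, 10)
        self.width = width

    def forward(self, x, keep_ratio=1.0):
        mask = None
        if keep_ratio < 1.0:
            k = int(self.width * keep_ratio)
            mask = torch.zeros(self.width, device=x.device)
            mask[torch.randperm(self.width)[:k]] = 1.0  # structured (channel-wise) drop
        h = F.relu(self.fc1(x, mask))
        return self.fc2(h)

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

full_logits = model(x)                 # full network acts as its own teacher
sub_logits = model(x, keep_ratio=0.5)  # sampled structured sub-network
loss = F.cross_entropy(full_logits, y) + F.kl_div(
    F.log_softmax(sub_logits, dim=1),
    F.softmax(full_logits.detach(), dim=1),
    reduction="batchmean",
)  # self-distillation: the full network supervises the kept weights
loss.backward()
opt.step()
```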

Knowledge Distillation • Model Compression

Efficient Architecture Search via Bi-level Data Pruning

no code implementations • 21 Dec 2023 • Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang

Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.

Neural Architecture Search

SpVOS: Efficient Video Object Segmentation with Triple Sparse Convolution

no code implementations • 23 Oct 2023 • Weihao Lin, Tao Chen, Chong Yu

Therefore, we propose a sparse baseline of VOS named SpVOS in this work, which develops a novel triple sparse convolution to reduce the computation costs of the overall VOS framework.
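As a rough illustration of the motivation (not the SpVOS triple sparse convolution itself), the toy snippet below gates a dense convolution with a binary foreground mask; an actual sparse convolution would skip the arithmetic at inactive sites instead of merely discarding their outputs. The shapes and the mask threshold are arbitrary.

```python
# Toy mask-gated convolution, not the SpVOS implementation.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
frame = torch.randn(1, 3, 64, 64)
fg_mask = (torch.rand(1, 1, 64, 64) > 0.8).float()  # ~20% of pixels marked foreground

dense_out = conv(frame)
sparse_out = dense_out * fg_mask  # only foreground responses are kept
print(f"active sites: {fg_mask.mean().item():.0%} of the feature map")
```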

Object • Semantic Segmentation • +2

Boosting Residual Networks with Group Knowledge

1 code implementation • 26 Aug 2023 • Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang

Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.
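A minimal sketch of that training scheme, assuming a toy residual network (this is not the paper's code): nested subnets are formed by skipping residual blocks, and a deeper subnet's output supervises a shallower one through a distillation loss. The depth, which blocks are kept, and the loss weighting are illustrative assumptions.

```python
# Illustrative sketch of subnet-in-subnet supervision, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)

class ResNetToy(nn.Module):
    def __init__(self, depth=6, dim=32, classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(ResBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, classes)

    def forward(self, x, keep=None):
        # `keep` selects which residual blocks form the subnet; None keeps all of them
        for i, block in enumerate(self.blocks):
            if keep is None or i in keep:
                x = block(x)
        return self.head(x)

model = ResNetToy()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

deep = model(x, keep={0, 1, 2, 3, 4})  # upper-level (larger) subnet
shallow = model(x, keep={0, 1, 2})     # lower-level subnet nested inside it
loss = F.cross_entropy(deep, y) + F.kl_div(
    F.log_softmax(shallow, dim=1),
    F.softmax(deep.detach(), dim=1),
    reduction="batchmean",
)  # the deeper group's knowledge supervises the smaller subnet
loss.backward()
```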

Knowledge Distillation
