1 code implementation • 28 Sep 2022 • Xiangcheng Liu, Tianyi Wu, Guodong Guo
The learnable thresholds are optimized during budget-aware training to balance accuracy and complexity, yielding input-dependent pruning configurations for different instances.
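The idea of threshold-based, input-dependent pruning can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: `scores`, `threshold`, and the sigmoid relaxation are assumed here; the soft mask stands in for whatever differentiable surrogate the budget-aware training actually uses.

```python
import numpy as np

def soft_keep_mask(scores, threshold, temperature=0.1):
    # Differentiable surrogate for (score > threshold), usable during
    # budget-aware training; sharper as temperature shrinks
    return 1.0 / (1.0 + np.exp(-(scores - threshold) / temperature))

def prune_tokens(tokens, scores, threshold):
    # Hard pruning at inference: drop tokens whose importance score
    # falls below the learned threshold
    keep = scores >= threshold
    return tokens[keep]
```

Because the threshold is a learned parameter, different inputs (with different score distributions) end up keeping different numbers of tokens, which is what makes the pruning configuration instance-dependent.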
Ranked #6 on Efficient ViTs on ImageNet-1K (With LV-ViT-S)
1 code implementation • 26 Apr 2022 • Hongyi Yao, Pu Li, Jian Cao, Xiangcheng Liu, Chenying Xie, Bingzhang Wang
We are the first to propose the more constrained but hardware-friendly power-of-two quantization scheme specifically for low-bit PTQ, and we show that it can achieve nearly the same accuracy as state-of-the-art PTQ methods.
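The core constraint of power-of-two quantization is that every level is a signed power of two, so multiplications reduce to bit-shifts on hardware. A minimal sketch of that rounding step, assuming a simple nearest-power-of-two rule (the function name and exponent-range choice are illustrative, not the paper's algorithm):

```python
import numpy as np

def pot_quantize(w, bits=4):
    """Round each weight to the nearest signed power of two.

    Hedged sketch: with `bits` bits, one sign bit plus an exponent code
    indexing 2**(bits-1) - 1 negative exponents (scale factors omitted).
    """
    w = np.asarray(w, dtype=np.float64)
    sign = np.sign(w)
    mag = np.abs(w)
    # Guard against log2(0); exact zeros stay zero because sign == 0
    exp = np.round(np.log2(np.maximum(mag, 1e-30)))
    # Clip exponents to the range the bit width can represent
    max_exp, min_exp = 0, -(2 ** (bits - 1) - 1)
    exp = np.clip(exp, min_exp, max_exp)
    return sign * (2.0 ** exp)
```

For example, weights 0.3 and -0.12 land on 0.25 (= 2^-2) and -0.125 (= -2^-3). The rigid level spacing is what makes this scheme more constrained than uniform quantization, and closing the resulting accuracy gap is the challenge the paper addresses.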
no code implementations • 4 Oct 2021 • Yuan Zhang, Jian Cao, Ling Zhang, Xiangcheng Liu, Zhiyi Wang, Feng Ling, Weiqian Chen
Learning subtle representations of object parts plays a vital role in the field of fine-grained visual recognition (FGVR).
Ranked #10 on Fine-Grained Image Classification on Stanford Dogs
Fine-Grained Image Classification • Fine-Grained Visual Recognition
no code implementations • 14 Sep 2021 • Xiangcheng Liu, Jian Cao, Hongyi Yao, Wenyu Sun, Yuan Zhang
While previous pruning methods have mostly focused on identifying unimportant channels, channel pruning has in recent years been viewed as a special case of neural architecture search.
no code implementations • 2 Dec 2020 • Wenyu Sun, Jian Cao, Pengtao Xu, Xiangcheng Liu, Pu Li
We propose an efficient once-for-all budgeted pruning framework (OFARPruning) that finds many compact network structures close to winner tickets in the early training stage, taking the effect of input resolution into account during the pruning process.