Lottery Tickets can have Structural Sparsity

29 Sep 2021 · Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang

The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., $\textit{winning tickets}$) that can be trained in isolation to match full accuracy. Despite many exciting efforts, one piece of "common sense" is seldom challenged: a winning ticket is found by iterative magnitude pruning (IMP), and hence the resulting pruned subnetworks have only unstructured sparsity. This gap limits the appeal of winning tickets in practice, since highly irregular sparse patterns are challenging to accelerate on hardware. Meanwhile, directly substituting structured pruning for unstructured pruning in IMP damages performance more severely and usually fails to locate winning tickets. In this paper, we demonstrate $\textbf{the first positive result}$ that a structurally sparse winning ticket can in general be effectively found. The core idea is to append "post-processing techniques" after each round of (unstructured) IMP to enforce the formation of structural sparsity. Specifically, we first "re-fill" pruned elements back in some channels deemed important, and then "re-group" non-zero elements to create flexible group-wise structural patterns. Both our identified channel- and group-wise structural subnetworks win the lottery, with substantial inference speedups readily supported by practical hardware. Extensive experiments, conducted on diverse datasets across multiple network backbones, consistently validate our proposal and show that the hardware-acceleration roadblock of LTH is removed. Specifically, the structural winning tickets obtain up to $\{64.93\%, 64.84\%, 64.84\%\}$ running time savings at $\{36\%\sim 80\%, 74\%, 58\%\}$ sparsity on CIFAR, Tiny-ImageNet, and ImageNet, while maintaining comparable accuracy. All code and pre-trained models will be publicly released.
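
As a rough illustration of the "re-fill" idea described above, the minimal PyTorch sketch below converts an unstructured IMP mask for a convolutional weight of shape (out_channels, in_channels, kH, kW) into a channel-wise structural mask, ranking output channels by the L1 norm of their surviving weights. The function name, the keep-ratio parameter, and the channel-importance criterion are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def refill_channel_mask(weight, unstructured_mask, channel_keep_ratio=0.5):
    """Hypothetical sketch of a 're-fill' step: turn an unstructured IMP mask
    into a channel-wise structural mask for a conv weight of shape
    (out_channels, in_channels, kH, kW).

    Channel importance is approximated here by the L1 norm of the surviving
    (unpruned) weights in each output channel; the paper's actual criterion
    may differ.
    """
    out_channels = weight.shape[0]

    # Score each output channel by the magnitude of its surviving weights.
    surviving = (weight * unstructured_mask).abs()
    channel_scores = surviving.reshape(out_channels, -1).sum(dim=1)

    # Keep the top-scoring channels; the rest are pruned entirely.
    num_keep = max(1, int(round(channel_keep_ratio * out_channels)))
    keep_idx = torch.topk(channel_scores, num_keep).indices

    # "Re-fill": restore all elements inside kept channels (dense channels),
    # zero everything else, yielding a channel-wise structural mask.
    structural_mask = torch.zeros_like(unstructured_mask)
    structural_mask[keep_idx] = 1.0
    return structural_mask

# Example usage with a random layer and an 80%-sparse unstructured mask.
weight = torch.randn(64, 32, 3, 3)
unstructured_mask = (torch.rand_like(weight) > 0.8).float()
mask = refill_channel_mask(weight, unstructured_mask, channel_keep_ratio=0.3)
```

The subsequent "re-group" step, which rearranges the remaining non-zero elements into hardware-friendly group-wise blocks, is not shown here.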
