ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

ICCV 2017 · Jian-Hao Luo, Jianxin Wu, Weiyao Lin

We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important...
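To make the idea of filter-level pruning concrete, below is a minimal sketch in PyTorch. It discards whole filters of one convolutional layer and the matching input channels of the following layer. Note that ThiNet itself selects filters using statistics computed from the next layer's inputs; the simple L1-norm weight score used here is only an illustrative stand-in for that data-driven criterion, and the function name and layers are hypothetical.

```python
# Sketch of filter-level pruning: drop entire filters, then fine-tune.
# The L1-norm importance score is a placeholder, not ThiNet's criterion.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, next_conv: nn.Conv2d, keep_ratio: float = 0.5):
    """Remove whole filters from `conv` and the corresponding input channels of `next_conv`."""
    num_keep = max(1, int(conv.out_channels * keep_ratio))
    # Per-filter importance: L1 norm of the filter's weights (stand-in criterion).
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.argsort(scores, descending=True)[:num_keep].sort().values

    # Thinner replacement for the pruned layer.
    pruned = nn.Conv2d(conv.in_channels, num_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()

    # The next layer loses the corresponding input channels.
    pruned_next = nn.Conv2d(num_keep, next_conv.out_channels, next_conv.kernel_size,
                            stride=next_conv.stride, padding=next_conv.padding,
                            bias=next_conv.bias is not None)
    pruned_next.weight.data = next_conv.weight.data[:, keep_idx].clone()
    if next_conv.bias is not None:
        pruned_next.bias.data = next_conv.bias.data.clone()
    return pruned, pruned_next

# Usage: prune two consecutive conv layers, then fine-tune to recover accuracy.
conv1 = nn.Conv2d(3, 64, 3, padding=1)
conv2 = nn.Conv2d(64, 128, 3, padding=1)
conv1, conv2 = prune_conv_filters(conv1, conv2, keep_ratio=0.5)
```

Because entire filters are removed, the pruned network keeps a standard dense convolutional structure and runs on off-the-shelf deep learning libraries without sparse kernels.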

