Search Results for author: Tailin Liang

Found 2 papers, 1 paper with code

Pruning and Quantization for Deep Neural Network Acceleration: A Survey

no code implementations • 24 Jan 2021 • Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang

We discuss trade-offs in element-wise, channel-wise, shape-wise, filter-wise, layer-wise and even network-wise pruning.

Quantization
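
The survey's abstract mentions several pruning granularities. As a rough, hypothetical illustration (not code from the paper), the sketch below contrasts element-wise (unstructured) and channel-wise (structured) pruning masks on a convolution weight tensor; the tensor shape and 50% sparsity target are arbitrary assumptions.

```python
# Toy sketch of two pruning granularities; shapes and thresholds are
# illustrative assumptions, not taken from the survey.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4, 3, 3))  # (out_ch, in_ch, kH, kW)

# Element-wise (unstructured) pruning: zero the smallest-magnitude
# individual weights, here the bottom 50%.
threshold = np.quantile(np.abs(weights), 0.5)
element_mask = np.abs(weights) >= threshold
element_pruned = weights * element_mask

# Channel-wise (structured) pruning: rank output channels by L1 norm
# and zero the weakest half of them entirely.
channel_norms = np.abs(weights).sum(axis=(1, 2, 3))
keep = channel_norms >= np.median(channel_norms)
channel_pruned = weights * keep[:, None, None, None]

print("element-wise sparsity:", 1 - element_mask.mean())
print("channels kept:", int(keep.sum()), "of", weights.shape[0])
```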

Dynamic Runtime Feature Map Pruning

1 code implementation • 24 Dec 2018 • Tailin Liang, Lei Wang, Shaobo Shi, John Glossner

Of the networks considered, those using ReLU (AlexNet, SqueezeNet, VGG16) contain a high percentage of 0-valued parameters and can be statically pruned.
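
The observation behind this paper is that ReLU drives many feature-map values to exactly zero, so consistently zero channels can be pruned statically. A minimal sketch of measuring that sparsity is shown below; the random tensor and the 90% cut-off are assumptions standing in for real AlexNet/SqueezeNet/VGG16 activations, not the paper's implementation.

```python
# Toy sketch: estimate per-channel zero fractions after ReLU; random
# inputs stand in for real network activations.
import numpy as np

rng = np.random.default_rng(0)
pre_activation = rng.standard_normal((1, 64, 56, 56))  # (N, C, H, W)

relu_out = np.maximum(pre_activation, 0.0)  # ReLU zeroes all negatives

# Channels that are zero for (nearly) all inputs are candidates for
# static feature-map pruning.
zero_fraction = (relu_out == 0).mean(axis=(0, 2, 3))
print("overall zero fraction:", (relu_out == 0).mean())
print("channels >90% zero:", int((zero_fraction > 0.9).sum()))
```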
