Search Results for author: Raden Mu'az Mun'im

Found 2 papers, 0 papers with code

MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning

no code implementations • ICLR 2020 • Raden Mu'az Mun'im, Jie Lin, Vijay Chandrasekhar, Koichi Shinoda

(4) Fast: the number of training epochs required by MaskConvNet is observed to be close to that of training a baseline without pruning (see the sketch below).

Network Pruning
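
No code is available for this paper, but the title names the core technique clearly enough for a rough illustration. The following is a minimal, hypothetical PyTorch sketch of filter pruning via learnable per-filter masks with a budget penalty; it is not the MaskConvNet implementation, and the names `MaskedConv2d`, `budget_penalty`, `budget`, and `lam` are assumptions.

```python
# Illustrative sketch of budget-constrained filter pruning with learnable
# masks (NOT the paper's implementation; names are assumptions).
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One learnable mask logit per output filter.
        self.mask_logits = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        # Soft mask in (0, 1); filters whose mask tends to 0 can be
        # physically removed after training.
        mask = torch.sigmoid(self.mask_logits)
        return self.conv(x) * mask.view(1, -1, 1, 1)

def budget_penalty(model, budget, lam=1.0):
    # Penalize deviation of the expected fraction of kept filters
    # from the target budget (a hypothetical formulation).
    kept, total = 0.0, 0.0
    for m in model.modules():
        if isinstance(m, MaskedConv2d):
            kept = kept + torch.sigmoid(m.mask_logits).sum()
            total = total + m.mask_logits.numel()
    return lam * (kept / total - budget).abs()
```

In training, the penalty would simply be added to the task loss, e.g. `loss = task_loss + budget_penalty(model, budget=0.5)`; after training, filters whose masks are near zero are removed, yielding the smaller network.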

Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition

no code implementations • 12 Nov 2018 • Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda

We investigate the feasibility of sequence-level knowledge distillation of Sequence-to-Sequence (Seq2Seq) models for Large Vocabulary Continuous Speech Recognition (LVCSR); a minimal sketch of the general approach appears below.

Knowledge Distillation • Model Compression • +2
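
No code is available here either, but sequence-level knowledge distillation has a standard shape: the teacher's decoded hypothesis replaces the ground-truth transcript as the student's training target. The sketch below is illustrative only, not the paper's exact recipe; `teacher.generate`, the `student(features, targets)` signature, and `seq_kd_step` are assumed names.

```python
# Illustrative sketch of sequence-level knowledge distillation for a
# Seq2Seq ASR model (NOT the paper's recipe; names are assumptions).
import torch
import torch.nn.functional as F

def seq_kd_step(teacher, student, optimizer, features):
    # 1) The teacher decodes a pseudo-transcript for the utterance
    #    (e.g. via beam search), shape (batch, time).
    with torch.no_grad():
        pseudo_targets = teacher.generate(features)

    # 2) The student is trained with cross-entropy against the
    #    teacher's output, as if it were the reference transcript
    #    (standard teacher forcing with a one-step shift).
    logits = student(features, pseudo_targets[:, :-1])
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        pseudo_targets[:, 1:].reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the teacher's complete hypothesis, rather than matching per-frame output distributions, is what makes the distillation "sequence-level".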
