Search Results for author: Shatrughan Singh

Found 3 papers, 0 papers with code

Multi-stage Progressive Compression of Conformer Transducer for On-device Speech Recognition

no code implementations · 1 Oct 2022 · Jash Rathod, Nauman Dawalatabad, Shatrughan Singh, Dhananjaya Gowda

Knowledge distillation (KD) is a popular model compression approach that has been shown to achieve smaller model sizes with relatively little degradation in model performance.
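For context, soft-target knowledge distillation (Hinton et al., 2015) trains a compressed student model to match the teacher's temperature-softened output distribution. The sketch below is a minimal, generic PyTorch version of that loss, not the paper's implementation; the function name, temperature value, and tensor shapes are assumptions for illustration.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-target KD loss: KL divergence between the
    temperature-scaled teacher and student distributions.
    Both inputs are assumed to be raw logits of shape [batch, vocab].
    Illustrative sketch only, not the paper's method."""
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_prob = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_logp, teacher_prob, reduction="batchmean") * (t * t)
```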

Automatic Speech Recognition · Knowledge Distillation · +2

Two-Pass End-to-End ASR Model Compression

no code implementations · 8 Jan 2022 · Nauman Dawalatabad, Tushar Vatsal, Ashutosh Gupta, Sungsoo Kim, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim

With popular transducer-based models, it has become practical to deploy streaming speech recognition models on small devices [1].

Knowledge Distillation · Model Compression · +2

End-to-End Training of a Large Vocabulary End-to-End Speech Recognition System

no code implementations · 22 Dec 2019 · Chanwoo Kim, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda

Our end-to-end speech recognition system built using this training infrastructure achieved a 2.44% WER on the LibriSpeech test-clean set after applying shallow fusion with a Transformer language model (LM).
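Shallow fusion, mentioned in the snippet, interpolates the ASR model's log-probabilities with those of an external LM at each decoding step. A minimal PyTorch sketch under that assumption follows; the function name, tensor shapes, and the lm_weight value are hypothetical and not taken from the paper.

```python
import torch

def shallow_fusion_step(asr_logits, lm_logits, lm_weight=0.3):
    """Combine ASR and external LM scores for one decoding step.
    Hypothetical inputs: asr_logits and lm_logits are [vocab] raw
    logits from the E2E ASR model and the external LM; lm_weight is
    a tunable interpolation weight. Illustrative sketch only."""
    asr_logp = torch.log_softmax(asr_logits, dim=-1)
    lm_logp = torch.log_softmax(lm_logits, dim=-1)
    # Fused score used to rank candidate tokens during beam search.
    return asr_logp + lm_weight * lm_logp
```

The fused scores would typically feed a beam search, with lm_weight tuned on a development set.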

Data Augmentation · Speech Recognition · +1
