no code implementations • 1 Oct 2022 • Jash Rathod, Nauman Dawalatabad, Shatrughan Singh, Dhananjaya Gowda
Knowledge distillation (KD) is a popular model compression approach that has been shown to achieve smaller model sizes with relatively little degradation in model performance. (A minimal KD-loss sketch follows this entry.)
Automatic Speech Recognition (ASR) +3
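The entry provides no implementation, so here is a minimal sketch of the standard soft-label distillation loss in PyTorch (a Hinton-style KL term plus hard-label cross-entropy), not the specific compression scheme of this paper; `temperature` and `alpha` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic KD loss: weighted sum of a soft-target KL term and the
    usual hard-label cross-entropy. Hyperparameters are illustrative."""
    # Soften both distributions with the temperature.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: 8 examples, 10 classes (shapes are assumptions).
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```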
no code implementations • 8 Jan 2022 • Nauman Dawalatabad, Tushar Vatsal, Ashutosh Gupta, Sungsoo Kim, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim
With the use of popular transducer-based models, it has become practical to deploy streaming speech recognition models on small devices [1].
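This entry also gives no code. Purely as an illustration of the transducer objective such streaming models optimize, here is a minimal call to torchaudio's RNN-T loss on dummy tensors; all shapes and the vocabulary size are assumptions, not values from the paper.

```python
import torch
import torchaudio

# Assumed shapes: batch=2, T=50 encoder frames, U=10 target tokens,
# vocabulary of 29 labels plus one blank symbol.
B, T, U, V = 2, 50, 10, 30
logits = torch.randn(B, T, U + 1, V)                    # joiner outputs
targets = torch.randint(1, V - 1, (B, U), dtype=torch.int32)
logit_lengths = torch.full((B,), T, dtype=torch.int32)
target_lengths = torch.full((B,), U, dtype=torch.int32)

# blank=-1 means the last vocabulary index acts as the blank label.
loss_fn = torchaudio.transforms.RNNTLoss(blank=-1, reduction="mean")
loss = loss_fn(logits, targets, logit_lengths, target_lengths)
```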
no code implementations • 22 Dec 2019 • Chanwoo Kim, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda
Our end-to-end speech recognition system, built using this training infrastructure, showed a 2.44% WER on the LibriSpeech test-clean set after applying shallow fusion with a Transformer language model (LM).
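Shallow fusion, as referenced above, combines ASR and external-LM scores log-linearly at decoding time without any joint training. Below is a minimal single-step sketch; `lm_weight` is an assumed value, and in a full decoder the fused scores would rank beam-search hypotheses rather than drive a greedy argmax.

```python
import torch

def shallow_fusion_step(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """One decoding step with shallow fusion: ASR and LM scores are
    combined additively in log space; the two models stay independent.
    `lm_weight` is an illustrative interpolation weight."""
    fused = asr_log_probs + lm_weight * lm_log_probs
    return fused.argmax(dim=-1), fused

# Toy usage: a vocabulary of 100 subword units (assumed size).
asr_log_probs = torch.log_softmax(torch.randn(1, 100), dim=-1)
lm_log_probs = torch.log_softmax(torch.randn(1, 100), dim=-1)
token, scores = shallow_fusion_step(asr_log_probs, lm_log_probs)
```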