Search Results for author: Nauman Dawalatabad

Found 7 papers, 1 paper with code

On Unsupervised Uncertainty-Driven Speech Pseudo-Label Filtering and Model Calibration

no code implementations14 Nov 2022 Nauman Dawalatabad, Sameer Khurana, Antoine Laurent, James Glass

Dropout-based Uncertainty-driven Self-Training (DUST) proceeds by first training a teacher model on source-domain labeled data.

Pseudo Label, Unsupervised Domain Adaptation
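The DUST procedure summarized above filters pseudo-labels by how much a teacher's dropout-perturbed hypotheses disagree with its deterministic hypothesis. A minimal sketch of that filtering step, assuming a normalized edit-distance disagreement measure and an illustrative threshold (the paper's exact model, distance, and threshold are not given here):

```python
# Hedged sketch of DUST-style pseudo-label filtering.
# `threshold` and the use of normalized Levenshtein distance are
# illustrative assumptions, not the paper's exact settings.

def edit_distance(a, b):
    # Standard Levenshtein distance over token sequences (rolling row).
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def dust_keep(reference_hyp, dropout_hyps, threshold=0.3):
    """Keep a pseudo-labeled utterance only if hypotheses decoded under
    dropout agree with the deterministic hypothesis (low average
    normalized edit distance = low model uncertainty)."""
    denom = max(len(reference_hyp), 1)
    disagreement = sum(edit_distance(reference_hyp, h)
                       for h in dropout_hyps) / (len(dropout_hyps) * denom)
    return disagreement <= threshold
```

Utterances that pass the check would be added to the training set for the next self-training round; high-disagreement (uncertain) pseudo-labels are dropped.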

Multi-stage Progressive Compression of Conformer Transducer for On-device Speech Recognition

no code implementations1 Oct 2022 Jash Rathod, Nauman Dawalatabad, Shatrughan Singh, Dhananjaya Gowda

Knowledge distillation (KD) is a popular model compression approach that has been shown to achieve smaller model sizes with relatively little degradation in model performance.

Automatic Speech Recognition (ASR) +3
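The KD approach mentioned above trains a small student to match a larger teacher's output distribution. A minimal sketch of the standard distillation loss (Hinton-style soft targets mixed with hard-label cross-entropy); the temperature and mixing weight here are illustrative assumptions, not the paper's multi-stage recipe:

```python
# Hedged sketch of a standard knowledge-distillation loss.
# T (temperature) and alpha (soft/hard mixing weight) are assumed values.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * KL(teacher || student) at temperature T (scaled by T^2)
    + (1 - alpha) * cross-entropy with the ground-truth labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))
            ).sum(axis=-1).mean() * T * T
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]
                   + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

When the student's logits match the teacher's, the soft term vanishes and only the hard cross-entropy remains; the T^2 factor keeps the soft-target gradients on the same scale as the hard-label gradients.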

Two-Pass End-to-End ASR Model Compression

no code implementations8 Jan 2022 Nauman Dawalatabad, Tushar Vatsal, Ashutosh Gupta, Sungsoo Kim, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim

With the use of popular transducer-based models, it has become practical to deploy streaming speech recognition models on small devices [1].

Knowledge Distillation, Model Compression +3

Front-end Diarization for Percussion Separation in Taniavartanam of Carnatic Music Concerts

no code implementations4 Mar 2021 Nauman Dawalatabad, Jilt Sebastian, Jom Kuriakose, C. Chandra Sekhar, Shrikanth Narayanan, Hema A. Murthy

In this work, we address the problem of separating the percussive voices in the taniavartanam segments of Carnatic music.
