no code implementations • Findings (ACL) 2022 • Scott Novotney, Sreeparna Mukherjee, Zeeshan Ahmed, Andreas Stolcke
Training the model initially with proxy context retains 67% of the perplexity gain after adapting to real context.
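A minimal sketch of the two-stage recipe the snippet describes: pretrain a context-conditioned LM with proxy context standing in for scarce real context, then adapt the same model on real context. This is an illustration under assumptions, not the paper's code; the architecture and all names (`ContextualLM`, `train_step`, etc.) are invented for the example.

```python
import torch
import torch.nn as nn

class ContextualLM(nn.Module):
    """Language model whose decoder is conditioned on an encoded context."""
    def __init__(self, vocab_size: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.ctx_encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, context_ids, input_ids):
        # Encode the (proxy or real) context into the decoder's initial state.
        _, ctx_state = self.ctx_encoder(self.embed(context_ids))
        hidden, _ = self.decoder(self.embed(input_ids), ctx_state)
        return self.out(hidden)

model = ContextualLM(vocab_size=10_000)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(context_ids, input_ids, target_ids):
    logits = model(context_ids, input_ids)               # (B, T, V)
    loss = loss_fn(logits.transpose(1, 2), target_ids)   # CE expects (B, V, T)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 1: pretrain with proxy context (e.g., generic text standing in for
# the unavailable real context). Stage 2: fine-tune the same weights on the
# limited real-context data; per the abstract, stage 1 alone already
# retains 67% of the final perplexity gain.
```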
1 code implementation • Findings (ACL) 2021 • Richard Diehl Martinez, Scott Novotney, Ivan Bulyko, Ariya Rastrow, Andreas Stolcke, Ankur Gandhe
When applied to a large de-identified dataset of utterances collected by a popular voice assistant platform, our method reduces perplexity by 7.0% relative over a standard LM that does not incorporate contextual information.
Automatic Speech Recognition (ASR) +2
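One common way to incorporate contextual information into an LM, sketched below as a hedged illustration: embed the context signals and let each decoder step attend over them, mixing the attended summary into the output projection. The paper's exact architecture may differ; every name here (`ContextAttentionLM`, `ctx_ids`, the dimensions) is an assumption made for the example.

```python
import torch
import torch.nn as nn

class ContextAttentionLM(nn.Module):
    """Toy LM that attends over embedded context features at each step."""
    def __init__(self, vocab_size: int, num_ctx_features: int, dim: int = 256):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, dim)
        self.ctx_embed = nn.Embedding(num_ctx_features, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * dim, vocab_size)

    def forward(self, input_ids, ctx_ids):
        hidden, _ = self.rnn(self.tok_embed(input_ids))  # (B, T, D)
        ctx = self.ctx_embed(ctx_ids)                    # (B, K, D)
        # Each time step queries the K context embeddings via attention,
        # so the model can ignore context that is unhelpful for this token.
        ctx_summary, _ = self.attn(hidden, ctx, ctx)     # (B, T, D)
        return self.out(torch.cat([hidden, ctx_summary], dim=-1))
```

Attention (rather than plain concatenation) lets the model learn per-token how much weight to give each context signal, which is one plausible route to the kind of perplexity reduction reported above.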
no code implementations • 30 Nov 2020 • Vijay Ravi, Yile Gu, Ankur Gandhe, Ariya Rastrow, Linda Liu, Denis Filimonov, Scott Novotney, Ivan Bulyko
We show that this simple method can improve performance on rare words by 3.7% WER relative without degradation on a general test set, and the improvement from USF is additive to any additional language-model-based rescoring.
Automatic Speech Recognition (ASR) +2
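A minimal sketch of unigram shallow fusion (USF) as the snippet describes it: during decoding, add a weighted bonus to tokens that appear in a precomputed rare-word unigram list, leaving all other scores untouched. The weight, the boost values, and the function names are illustrative assumptions, not the paper's exact recipe.

```python
from typing import Dict

def usf_score(asr_log_prob: float, token: str,
              rare_unigram_boost: Dict[str, float],
              weight: float = 0.3) -> float:
    """Combine the ASR token log-probability with a rare-word unigram bonus.

    Tokens absent from the rare-word list get a bonus of 0.0, so the
    decoder's behavior on common words is unchanged.
    """
    return asr_log_prob + weight * rare_unigram_boost.get(token, 0.0)

# Example: "okapi" is on the rare-word list, "the" is not.
boosts = {"okapi": 2.5}
print(usf_score(-4.1, "okapi", boosts))  # boosted candidate
print(usf_score(-0.3, "the", boosts))    # score unchanged
```

Because the bonus is applied inside first-pass decoding rather than to the final hypothesis scores, its gains can stack with a separate LM rescoring pass, consistent with the additivity claim above.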