Virtual assistants (VAs) rely on automatic speech recognition (ASR) to answer users' entity-centric spoken queries, so high-quality ASR is essential for a VA to work well.
We focus on improving the effectiveness of a VA in recognizing emerging entities in spoken queries. To that end, we customize entropy pruning by allowing a keep list of infrequent n-grams that are pruned under a more relaxed threshold, and we propose three methods to construct the keep list.
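The keep-list variant of entropy pruning can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the score dictionary, and the specific thresholds are our assumptions, and we abstract the relative-entropy computation into a precomputed per-n-gram score.

```python
# Hypothetical sketch of entropy pruning with a keep list.
# Assumption (not from the source): each n-gram already has a score
# approximating the relative-entropy increase incurred by pruning it;
# keep-list n-grams are judged against a more relaxed (smaller) threshold.

def entropy_prune(ngram_scores, keep_list, threshold, relaxed_threshold):
    """Return the n-grams retained after pruning.

    ngram_scores: dict mapping an n-gram (tuple of words) to the
        divergence increase incurred by pruning it.
    keep_list: set of infrequent n-grams given the relaxed threshold.
    threshold: n-grams scoring below this are pruned.
    relaxed_threshold: smaller cutoff applied to keep-list n-grams.
    """
    retained = {}
    for ngram, score in ngram_scores.items():
        limit = relaxed_threshold if ngram in keep_list else threshold
        if score >= limit:
            retained[ngram] = score
    return retained


# Toy usage: an emerging-entity bigram has a tiny score (it is rare),
# but the keep list shields it from the standard threshold.
scores = {
    ("play", "music"): 5e-7,     # frequent: comfortably above threshold
    ("call", "mom"): 2e-7,
    ("play", "newband"): 3e-10,  # emerging entity: below standard threshold
    ("the", "the"): 1e-10,       # junk n-gram: pruned as usual
}
keep = {("play", "newband")}
kept = entropy_prune(scores, keep, threshold=1e-8, relaxed_threshold=1e-12)
# ("play", "newband") survives via the relaxed threshold,
# while ("the", "the") is pruned under the standard one.
```

The design point is that a single global threshold trades model size against coverage uniformly, whereas the keep list carves out an exception for the infrequent n-grams that matter for emerging-entity recall.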
In this work, we uncover a theoretical connection between two language model interpolation techniques: count merging and Bayesian interpolation.
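For concreteness, the two techniques are commonly written as follows. These are standard formulations stated here as an assumption, not taken from the text above; $p_i(w \mid h)$ are the component language models, $h$ the history, and $\beta_i$, $\lambda_i$ per-component weights.

```latex
\begin{align}
  % Count merging: the mixture weight for component i is proportional
  % to the count c_i(h) of the history h in component i's training
  % data, scaled by a tunable \beta_i.
  p_{\mathrm{CM}}(w \mid h)
    &= \frac{\sum_i \beta_i \, c_i(h)\, p_i(w \mid h)}
            {\sum_i \beta_i \, c_i(h)} \\
  % Bayesian interpolation: the mixture weight is the posterior
  % probability of component i given the history h.
  p_{\mathrm{BI}}(w \mid h)
    &= \sum_i P(i \mid h)\, p_i(w \mid h),
  \qquad
  P(i \mid h) = \frac{\lambda_i \, p_i(h)}{\sum_j \lambda_j \, p_j(h)}
\end{align}
```

One way to see the relation: both are history-dependent linear mixtures, differing only in how the per-history weight is estimated, with $\beta_i\, c_i(h)$ playing the role of an unnormalized stand-in for $\lambda_i\, p_i(h)$.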