no code implementations • 22 Nov 2019 • Yiren Wang, Hongzhao Huang, Zhe Liu, Yutong Pang, Yongqiang Wang, ChengXiang Zhai, Fuchun Peng
Although n-gram language models (LMs) have been outperformed by state-of-the-art neural LMs, they are still widely used in speech recognition due to their high inference efficiency.
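The efficiency claim above comes from the fact that n-gram LM scoring is a table lookup rather than a neural forward pass. A minimal sketch of a maximum-likelihood bigram LM (toy corpus and unsmoothed probabilities are illustrative assumptions, not the paper's model):

```python
import math
from collections import Counter

# Hypothetical toy corpus; a real ASR n-gram LM is trained on far larger text.
corpus = "the cat sat on the mat the cat ran".split()

# Count unigrams and bigrams once at training time.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_logprob(prev, word):
    """Maximum-likelihood bigram log-probability. Inference is a dictionary
    lookup, which is why n-gram LMs remain fast compared with neural LMs."""
    count = bigrams.get((prev, word), 0)
    if count == 0:
        return float("-inf")  # no smoothing in this toy sketch
    return math.log(count / unigrams[prev])

def sentence_logprob(words):
    # Sum bigram log-probabilities over consecutive word pairs.
    return sum(bigram_logprob(p, w) for p, w in zip(words, words[1:]))
```

Production toolkits add smoothing and backoff on top of these counts, but the per-token lookup cost stays constant regardless of model size.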
no code implementations • 24 Oct 2019 • Hongzhao Huang, Fuchun Peng
In particular, our experiments on a video speech recognition dataset show that we are able to achieve WERRs ranging from 6.46% to 7.17% with only 5.5% to 11.9% of the parameter size of the well-known large GPT model [1], whose WERR with rescoring on the same dataset is 7.58%.
no code implementations • 22 Oct 2019 • Yongqiang Wang, Abdel-rahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer
We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition.
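The core operation of a transformer acoustic model is self-attention over a sequence of acoustic frames. A minimal single-head sketch in NumPy (the dimensions and random projections are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Single-head scaled dot-product self-attention over acoustic frames.
rng = np.random.default_rng(0)
T, d = 5, 8          # 5 acoustic frames, feature dimension 8
x = rng.normal(size=(T, d))

# Learned projections in a real model; random here for illustration.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)              # (T, T) frame-to-frame similarities
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
out = attn @ V                             # contextualized frame representations
```

Unlike a recurrent AM, every output frame attends to all input frames in one step, which is what makes the architecture attractive for hybrid ASR encoders.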
Ranked #28 on Speech Recognition on LibriSpeech test-other (using extra training data)
no code implementations • 28 Apr 2015 • Hongzhao Huang, Larry Heck, Heng Ji
Entity Disambiguation aims to link mentions of ambiguous entities to a knowledge base (e.g., Wikipedia).
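As a concrete illustration of the task, a toy disambiguator can link a mention to the knowledge-base entry whose description best overlaps the mention's context. The KB entries and overlap scoring below are illustrative assumptions, not the paper's method:

```python
# Toy knowledge base: entity name -> short bag-of-words description.
kb = {
    "Apple Inc.": "technology company iphone mac computer",
    "Apple (fruit)": "fruit tree orchard pie sweet",
}

def disambiguate(mention_context):
    """Pick the KB entity whose description shares the most words
    with the mention's surrounding context (naive overlap score)."""
    context_words = set(mention_context.lower().split())
    def overlap(entity):
        return len(context_words & set(kb[entity].split()))
    return max(kb, key=overlap)
```

Real systems replace the overlap score with richer features (entity priors, coherence among mentions, learned embeddings), but the linking decision has this same shape: score each candidate entity against the context and take the argmax.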