First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs

12 Aug 2014 • Awni Y. Hannun • Andrew L. Maas • Daniel Jurafsky • Andrew Y. Ng

Recent work demonstrated the feasibility of discarding the HMM sequence modeling framework by directly predicting transcript text from audio. This paper extends that approach in two ways. First, we demonstrate that a straightforward recurrent neural network architecture can achieve a high level of accuracy. Second, we propose and evaluate a modified prefix-search decoding algorithm. This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems.
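As a rough illustration of the approach the abstract describes, a bi-directional recurrent network trained to predict characters directly from audio with a CTC objective, here is a minimal sketch in PyTorch. The layer sizes, feature dimension, and character-set size are illustrative assumptions rather than values from the paper, and the paper's modified prefix-search decoding with a language model is not shown.

```python
# Minimal sketch (not the authors' implementation): a bi-directional RNN
# acoustic model trained with CTC, as described at a high level in the
# abstract. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class BiRNNAcousticModel(nn.Module):
    def __init__(self, n_features=123, n_hidden=512, n_chars=29):
        super().__init__()
        # Bi-directional recurrent layer over the acoustic feature frames.
        self.birnn = nn.RNN(n_features, n_hidden, batch_first=True,
                            bidirectional=True, nonlinearity="relu")
        # Per-frame scores over the characters plus the CTC blank symbol.
        self.output = nn.Linear(2 * n_hidden, n_chars + 1)

    def forward(self, features):
        hidden, _ = self.birnn(features)             # (batch, time, 2 * n_hidden)
        return self.output(hidden).log_softmax(-1)   # (batch, time, n_chars + 1)

# Training uses the CTC loss, which marginalizes over all frame-level
# alignments of the transcript. Decoding in the paper is a modified
# prefix search that incorporates a language model (omitted here).
model = BiRNNAcousticModel()
ctc_loss = nn.CTCLoss(blank=29)  # blank is the last output unit in this sketch

features = torch.randn(8, 200, 123)           # batch of 200-frame utterances
log_probs = model(features).transpose(0, 1)   # CTCLoss expects (time, batch, classes)
targets = torch.randint(0, 29, (8, 30))       # integer-encoded character transcripts
input_lengths = torch.full((8,), 200)
target_lengths = torch.full((8,), 30)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
```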
