
Future Word Contexts in Neural Network Language Models

Recently, bidirectional recurrent network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful...
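To make the distinction concrete, below is a minimal sketch of a bi-RNNLM in PyTorch. The layer sizes, the concatenation-based combination of the two directions, and all names are illustrative assumptions rather than the paper's exact configuration; the key point it illustrates is that the probability of each word is conditioned on a forward state over the history and a backward state over the future context, so the target word itself is never seen by either direction.

```python
# Minimal bi-RNNLM sketch (assumed hyperparameters and combination scheme).
import torch
import torch.nn as nn

class BiRNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Forward RNN summarizes the history w_1 .. w_{t-1};
        # backward RNN summarizes the future context w_{t+1} .. w_T.
        self.fwd_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len), including sentence-start/end markers
        emb = self.embed(tokens)
        h_fwd, _ = self.fwd_rnn(emb)              # state after reading w_1 .. w_t
        h_bwd = self.bwd_rnn(emb.flip(1))[0].flip(1)  # state after reading w_T .. w_t
        # To predict w_t, combine the forward state at t-1 (history only)
        # with the backward state at t+1 (future only).
        hist = h_fwd[:, :-2, :]
        futr = h_bwd[:, 2:, :]
        return self.out(torch.cat([hist, futr], dim=-1))  # (batch, seq_len-2, vocab)

# Usage: score the interior tokens of a batch of (toy) sentences.
vocab_size = 1000
model = BiRNNLM(vocab_size)
tokens = torch.randint(0, vocab_size, (4, 12))
logits = model(tokens)                            # (4, 10, 1000)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:-1].reshape(-1))
```

A uni-RNNLM corresponds to dropping the backward RNN and predicting from the forward state alone; the bi-RNNLM's extra future-context state is what the abstract credits for the reported gains.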
