
Purely sequence-trained neural networks for ASR based on lattice-free MMI

INTERSPEECH 2016 · Code: kaldi-asr/kaldi

Models trained with LF-MMI provide a relative word error rate reduction of ∼11.5% over those trained with the cross-entropy objective function, and ∼8% over those trained with the cross-entropy and sMBR objective functions.
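For context, a standard form of the MMI objective maximized during sequence training is sketched below in LaTeX; the notation here is assumed for illustration rather than copied from the paper. In the lattice-free variant, the denominator sum over competing hypotheses is computed exactly over a phone-level language-model graph rather than approximated with word lattices.

    % Sketch of the MMI criterion (assumed notation): for each utterance u with
    % acoustics O_u and reference word sequence w_u, maximize the log ratio of
    % the numerator (reference) likelihood to the denominator sum over all
    % competing word sequences w, with S_w the HMM for word sequence w.
    \mathcal{F}_{\mathrm{MMI}}(\theta)
      = \sum_{u} \log
        \frac{p_{\theta}(\mathbf{O}_u \mid \mathbb{S}_{w_u})\, P(w_u)}
             {\sum_{w} p_{\theta}(\mathbf{O}_u \mid \mathbb{S}_{w})\, P(w)}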

Tasks: Language Modelling, Large Vocabulary Continuous Speech Recognition, Speech Recognition