Search Results for author: Pegah Ghahremani

Found 1 papers, 0 papers with code

Purely sequence-trained neural networks for ASR based on lattice-free MMI

no code implementations INTERSPEECH 2016 Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, Sanjeev Khudanpur

Models trained with LF-MMI provide a relative word error rate reduction of ~11.5% over those trained with the cross-entropy objective function, and ~8% over those trained with cross-entropy and sMBR objective functions.
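For reference, the relative reductions quoted above follow the standard formula (baseline WER minus new WER, divided by baseline WER). A minimal sketch, using hypothetical WER values chosen only to illustrate the arithmetic (they are not results from the paper):

```python
def relative_wer_reduction(baseline_wer, new_wer):
    """Relative WER reduction: (baseline - new) / baseline."""
    return (baseline_wer - new_wer) / baseline_wer

# Hypothetical example (not from the paper): a cross-entropy baseline at
# 10.0% WER improved to 8.85% WER corresponds to an ~11.5% relative reduction.
print(round(100 * relative_wer_reduction(10.0, 8.85), 1))  # → 11.5
```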

Tasks: Language Modelling, Speech Recognition