Search Results for author: Xingyu Na

Found 4 papers, 1 paper with code

A Treatise On FST Lattice Based MMI Training

no code implementations • 17 Oct 2022 • Adnan Haider, Tim Ng, Zhen Huang, Xingyu Na, Antti Veikko Rosti

Maximum mutual information (MMI) has become one of the two de facto methods for sequence-level training of speech recognition acoustic models.
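The abstract does not spell the criterion out, but the standard textbook form of the MMI objective it refers to is (notation here is the conventional one, not taken from the paper):

```latex
\mathcal{F}_{\mathrm{MMI}}(\lambda)
  = \sum_{u} \log
    \frac{p_{\lambda}(O_u \mid S_{W_u})^{\kappa}\, P(W_u)}
         {\sum_{W} p_{\lambda}(O_u \mid S_{W})^{\kappa}\, P(W)}
```

where $O_u$ is the acoustic observation sequence for utterance $u$, $W_u$ its reference transcript, $S_W$ the state sequence for word sequence $W$, $\kappa$ an acoustic scale, and the denominator sum is in practice restricted to a lattice of competing hypotheses.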

Speech Recognition

AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale

no code implementations • 31 Aug 2018 • Jiayu Du, Xingyu Na, Xuechen Liu, Hui Bu

For the research community, we hope that the AISHELL-2 corpus can be a solid resource for topics like transfer learning and robust ASR.

Chinese Word Segmentation, Speech Recognition +2

Purely sequence-trained neural networks for ASR based on lattice-free MMI

no code implementations • INTERSPEECH 2016 • Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, Sanjeev Khudanpur

Models trained with LF-MMI provide a relative word error rate reduction of ∼11.5% over those trained with the cross-entropy objective function, and ∼8% over those trained with cross-entropy and sMBR objective functions.
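For concreteness, "relative word error rate reduction" means the drop in WER expressed as a fraction of the baseline WER. A minimal sketch (the WER figures in the example are illustrative, not from the paper):

```python
def relative_wer_reduction(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction, as a percentage of the baseline WER."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# Illustrative numbers: a baseline WER of 10.0% dropping to 8.85%
# corresponds to the ~11.5% relative reduction quoted above.
print(round(relative_wer_reduction(10.0, 8.85), 1))
```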

Language Modelling, Speech Recognition
