Search Results for author: Ming-Hui Wu

Found 4 papers, 0 papers with code

A Complementary Joint Training Approach Using Unpaired Speech and Text for Low-Resource Automatic Speech Recognition

no code implementations · 5 Apr 2022 · Ye-Qian Du, Jie Zhang, Qiu-Shi Zhu, Li-Rong Dai, Ming-Hui Wu, Xin Fang, Zhou-Wang Yang

Unpaired data has been shown to be beneficial for low-resource automatic speech recognition (ASR), where it can be used in hybrid models through multi-task training or language-model-dependent pre-training.

Automatic Speech Recognition, Speech Recognition

Learning Contextually Fused Audio-visual Representations for Audio-visual Speech Recognition

no code implementations · 15 Feb 2022 · Zi-Qiang Zhang, Jie Zhang, Jian-Shu Zhang, Ming-Hui Wu, Xin Fang, Li-Rong Dai

The proposed approach explores both the complementarity of audio-visual modalities and long-term context dependency using a transformer-based fusion module and a flexible masking strategy.

Audio-Visual Speech Recognition, Lipreading +4

XLST: Cross-lingual Self-training to Learn Multilingual Representation for Low Resource Speech Recognition

no code implementations · 15 Mar 2021 · Zi-Qiang Zhang, Yan Song, Ming-Hui Wu, Xin Fang, Li-Rong Dai

In this paper, we propose a weakly supervised multilingual representation learning framework, called cross-lingual self-training (XLST).

Data Augmentation, Representation Learning +2
