Search Results for author: Peihao Wu

Found 5 papers, 1 paper with code

CIF-PT: Bridging Speech and Text Representations for Spoken Language Understanding via Continuous Integrate-and-Fire Pre-Training

no code implementations 27 May 2023 Linhao Dong, Zhecheng An, Peihao Wu, Jun Zhang, Lu Lu, Zejun Ma

We also observe that the cross-modal representation extracted by CIF-PT obtains better performance than other neural interfaces on SLU tasks, including the dominant speech representation learned from self-supervised pre-training.

Intent Classification +5
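
For the CIF-PT entry above, the snippet below is a minimal NumPy sketch of the generic continuous integrate-and-fire (CIF) mechanism named in the title: per-frame weights are integrated until a firing threshold is reached, at which point one token-level representation is emitted. It illustrates the mechanism only and is not the authors' pre-training code; the threshold and the toy inputs are placeholders.

import numpy as np

def cif_aggregate(frames, weights, threshold=1.0):
    """Integrate frame-level encoder outputs until the accumulated weight
    crosses the firing threshold, then emit one token-level vector.

    frames:  (T, D) array of frame-level encoder outputs
    weights: (T,) array of non-negative per-frame weights
    """
    tokens = []
    acc_w = 0.0                          # accumulated weight
    acc_v = np.zeros(frames.shape[1])    # accumulated weighted frames
    for h, a in zip(frames, weights):
        if acc_w + a < threshold:
            acc_w += a                   # keep integrating this frame
            acc_v += a * h
        else:
            used = threshold - acc_w     # spend just enough weight to fire
            tokens.append(acc_v + used * h)
            acc_w = a - used             # carry the remainder forward
            acc_v = (a - used) * h
    return np.stack(tokens) if tokens else np.zeros((0, frames.shape[1]))

# toy usage: weights sum to about 2, so roughly two token vectors are fired
frames = np.random.randn(6, 4)
weights = np.array([0.3, 0.4, 0.5, 0.2, 0.4, 0.3])
print(cif_aggregate(frames, weights).shape)   # (2, 4)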

Enhancing Large Language Model with Self-Controlled Memory Framework

1 code implementation 26 Apr 2023 Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li

Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.

Book Summarization Document Summarization +5
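
The excerpt above states the motivation only; as a generic illustration of the kind of external-memory loop such a framework implies (not the paper's actual SCM design), the sketch below archives every past turn outside the prompt and re-injects only the turns most relevant to the current input. The relevance function and llm_call are placeholders.

from difflib import SequenceMatcher

def relevance(query, text):
    # placeholder lexical similarity; a real system would use embeddings
    return SequenceMatcher(None, query.lower(), text.lower()).ratio()

class MemoryController:
    """Keep the full dialogue history outside the prompt and recall only
    the most relevant past turns for each new user input."""

    def __init__(self, top_k=3):
        self.memory = []     # archived "User: ... / Assistant: ..." turns
        self.top_k = top_k

    def build_prompt(self, user_input):
        recalled = sorted(self.memory,
                          key=lambda turn: relevance(user_input, turn),
                          reverse=True)[: self.top_k]
        history = "\n".join(recalled)
        return f"Relevant history:\n{history}\n\nUser: {user_input}\nAssistant:"

    def step(self, user_input, llm_call):
        answer = llm_call(self.build_prompt(user_input))  # llm_call: any LLM API
        self.memory.append(f"User: {user_input}\nAssistant: {answer}")
        return answer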

Internal Language Model Estimation based Adaptive Language Model Fusion for Domain Adaptation

no code implementations 2 Nov 2022 Rao Ma, Xiaobo Wu, Jin Qiu, Yanan Qin, HaiHua Xu, Peihao Wu, Zejun Ma

Compared with both shallow fusion and ILME-based LM fusion, the proposed method achieves significantly better performance on the target test sets while incurring minimal degradation on the general test set.

Domain Adaptation Language Modelling
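
For the LM-fusion entry above, the snippet below sketches the standard ILME-style score combination used when rescoring ASR hypotheses: the end-to-end model's score is corrected by subtracting an estimate of its internal LM before the external, target-domain LM is added. The adaptive weighting that the paper proposes is not reproduced; the interpolation weights here are fixed placeholders.

import math

def ilme_fusion_score(log_p_e2e, log_p_ilm, log_p_ext,
                      lam_ilm=0.3, lam_ext=0.5):
    """Per-token fusion score for beam search or n-best rescoring.

    log_p_e2e: token log-prob under the end-to-end ASR model
    log_p_ilm: token log-prob under the estimated internal LM
    log_p_ext: token log-prob under the external (target-domain) LM
    """
    return log_p_e2e - lam_ilm * log_p_ilm + lam_ext * log_p_ext

# toy example for a single token
print(ilme_fusion_score(math.log(0.6), math.log(0.4), math.log(0.5)))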

Improving Contextual Representation with Gloss Regularized Pre-training

no code implementations Findings (NAACL) 2022 Yu Lin, Zhecheng An, Peihao Wu, Zejun Ma

To tackle this issue, we propose adding an auxiliary gloss regularizer module to BERT pre-training (GR-BERT) to enhance word semantic similarity.

Semantic Similarity Semantic Textual Similarity +3
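
The GR-BERT excerpt above describes adding an auxiliary gloss regularizer to BERT pre-training. The sketch below shows one plausible form of such a regularizer, assuming a cosine-similarity alignment between a word's contextual embedding and an encoding of its dictionary gloss added on top of the masked-LM loss; the exact loss used in the paper may differ, and the weight alpha is a placeholder.

import torch
import torch.nn.functional as F

def gloss_regularized_loss(mlm_loss, word_ctx_emb, gloss_emb, alpha=0.1):
    """Total pre-training loss = MLM loss + alpha * gloss alignment term.

    word_ctx_emb: (B, D) contextual embeddings of target words from BERT
    gloss_emb:    (B, D) embeddings of the corresponding dictionary glosses
    """
    # pull each word's contextual embedding toward its gloss embedding
    align = 1.0 - F.cosine_similarity(word_ctx_emb, gloss_emb, dim=-1).mean()
    return mlm_loss + alpha * align

# toy usage with random tensors
print(gloss_regularized_loss(torch.tensor(2.3), torch.randn(8, 768), torch.randn(8, 768)))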

Deep LSTM for Large Vocabulary Continuous Speech Recognition

no code implementations 21 Mar 2017 Xu Tian, Jun Zhang, Zejun Ma, Yi He, Juan Wei, Peihao Wu, Wenchang Situ, Shuai Li, Yang Zhang

With this competitive framework, LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin, and they outperform deep LSTM models trained with the conventional approach.

Speech Recognition +1
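
For the deep LSTM entry above, a minimal PyTorch sketch of a stacked LSTM acoustic model at the depth the abstract mentions (more than 7 layers): input features pass through 8 LSTM layers and a linear projection to frame-level acoustic-state logits. The feature dimension, hidden size, layer count, and output size are placeholders, and none of the training techniques from the paper are reproduced.

import torch
import torch.nn as nn

class DeepLSTMAcousticModel(nn.Module):
    """Stacked LSTM followed by a softmax-ready projection over acoustic states."""

    def __init__(self, feat_dim=80, hidden=512, layers=8, n_states=9000):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_states)

    def forward(self, feats):        # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.proj(out)        # (batch, time, n_states) logits

# toy usage: two utterances, 100 frames of 80-dim features each
model = DeepLSTMAcousticModel()
print(model(torch.randn(2, 100, 80)).shape)   # torch.Size([2, 100, 9000])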
