Search Results for author: Daimeng Wei

Found 24 papers, 1 paper with code

HI-CMLM: Improve CMLM with Hybrid Decoder Input

no code implementations INLG (ACL) 2021 Minghan Wang, Jiaxin Guo, Yuxia Wang, Yimeng Chen, Chang Su, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang

Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that predicting all target words based only on the hidden state of [MASK] is neither effective nor efficient in the initial refinement iterations, resulting in ungrammatical repetitions and slow convergence.

NMT Translation
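For context on the decoding scheme the abstract refers to, the sketch below shows the generic mask-predict iterative refinement loop from Ghazvininejad et al. (2019): predict every target position, then re-mask the least confident tokens and repeat. The `model` callable and its output format are placeholders for illustration only; this is not the HI-CMLM hybrid-input variant itself.

```python
MASK = "[MASK]"

def mask_predict(model, source, target_length, iterations=10):
    # Iteration 0: every target position is [MASK]; the model must predict all
    # words from the mask hidden states alone, the stage the abstract criticizes.
    target = [MASK] * target_length
    probs = [0.0] * target_length
    for t in range(iterations):
        # `model` is a hypothetical callable returning, for each target position,
        # the predicted token and its probability given source + current target.
        tokens, token_probs = model(source, target)
        target, probs = list(tokens), list(token_probs)
        # Re-mask the least confident predictions; the masked fraction shrinks
        # linearly with each refinement pass.
        n_mask = target_length * (iterations - 1 - t) // iterations
        if n_mask == 0:
            break
        worst = sorted(range(target_length), key=lambda i: probs[i])[:n_mask]
        for i in worst:
            target[i] = MASK
    return target
```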

HW-TSC’s Submissions to the WMT21 Biomedical Translation Task

no code implementations WMT (EMNLP) 2021 Hao Yang, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Daimeng Wei, Zongyao Li, Hengchao Shang, Minghan Wang, Jiaxin Guo, Lizhi Lei, Chuanfei Xu, Min Zhang, Ying Qin

This paper describes the submission of Huawei Translation Service Center (HW-TSC) to the WMT21 biomedical translation task in two language pairs: Chinese↔English and German↔English (our registered team name is HuaweiTSC).

Translation

Cross-Domain Audio Deepfake Detection: Dataset and Analysis

no code implementations 7 Apr 2024 Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang

Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy.

DeepFake Detection Face Swapping

A Novel Paradigm Boosting Translation Capabilities of Large Language Models

no code implementations 18 Mar 2024 Jiaxin Guo, Hao Yang, Zongyao Li, Daimeng Wei, Hengchao Shang, Xiaoyu Chen

Experiments conducted with the Llama2 model, particularly on Chinese-Llama2 after monolingual augmentation, demonstrate the improved translation capabilities of LLMs.

Machine Translation Translation

DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators

no code implementations 23 Feb 2024 Xinglin Lyu, Junhui Li, Yanqing Zhao, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang

In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), which enables LLMs to discriminatively model and exploit inter- and intra-sentence context and adapts them more effectively to context-aware NMT.

Machine Translation NMT +1

Text Style Transfer Back-Translation

1 code implementation 2 Jun 2023 Daimeng Wei, Zhanglin Wu, Hengchao Shang, Zongyao Li, Minghan Wang, Jiaxin Guo, Xiaoyu Chen, Zhengzhe Yu, Hao Yang

To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses a style transfer model to modify the source side of BT data.

Data Augmentation Domain Adaptation +4
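The abstract above describes a concrete data pipeline: back-translate target-language monolingual text, then run a style-transfer model over the synthetic source side before training. A hedged sketch of that flow is given below; `backward_mt`, `style_transfer`, and their methods are hypothetical wrappers for illustration, not the released TST BT code linked from this entry.

```python
def build_tst_bt_corpus(mono_target, backward_mt, style_transfer, bitext):
    # Standard back-translation: translate target-language monolingual text
    # back into the source language to create synthetic parallel pairs.
    augmented = []
    for tgt in mono_target:
        synthetic_src = backward_mt.translate(tgt)
        # TST BT: apply a style-transfer model to the *source* side so the
        # synthetic text better matches the style of natural input sentences.
        natural_src = style_transfer.rewrite(synthetic_src)
        augmented.append((natural_src, tgt))
    # Train the forward model on genuine bitext plus style-transferred BT data.
    return list(bitext) + augmented
```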

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations 22 Dec 2021 Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy for improving the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains a model on the combination of raw and distilled data.

Knowledge Distillation Machine Translation +1
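The three-stage recipe summarized in the abstract (pre-train on raw data, self-distill, re-train on raw plus distilled data) is sketched below as a minimal skeleton; `new_model()` and its `.fit`/`.translate` methods are stand-in placeholders, not the authors' implementation.

```python
def sdm_training(raw_pairs, new_model):
    # Stage 1: pre-train an autoregressive teacher on the raw parallel data.
    teacher = new_model()
    teacher.fit(raw_pairs)
    # Stage 2: self-distillation -- the teacher re-translates its own training
    # sources to produce a distilled parallel corpus.
    distilled_pairs = [(src, teacher.translate(src)) for src, _ in raw_pairs]
    # Stage 3: re-train a fresh model on the mixup of raw and distilled data.
    student = new_model()
    student.fit(raw_pairs + distilled_pairs)
    return student
```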

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations 22 Dec 2021 Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven effective in improving neural machine translation (NMT) systems, but translation quality reaches an upper bound once the number of encoder layers exceeds 18.

Machine Translation NMT +1
