Search Results for author: Minghan Wang

Found 35 papers, 1 paper with code

HW-TSC at SemEval-2022 Task 3: A Unified Approach Fine-tuned on Multilingual Pretrained Model for PreTENS

no code implementations SemEval (NAACL) 2022 Yinglu Li, Min Zhang, Xiaosong Qiao, Minghan Wang

To verify whether our model could also perform well on subtask 2 (the regression subtask), the ranking scores are transformed into classification labels with an up-sampling strategy.
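The abstract only names the strategy; below is a minimal sketch of what such a conversion could look like, assuming a hypothetical score threshold and a simple duplicate-to-balance up-sampling step (neither detail comes from the paper).

```python
import random
from collections import Counter

def scores_to_labels(scores, threshold=0.5):
    # Hypothetical thresholding: map continuous ranking scores to binary labels.
    return [1 if s >= threshold else 0 for s in scores]

def upsample(examples, labels, seed=0):
    # Duplicate minority-class examples until both classes are balanced.
    random.seed(seed)
    counts = Counter(labels)
    minority = min(counts, key=counts.get)
    gap = counts[max(counts, key=counts.get)] - counts[minority]
    minority_pool = [(x, y) for x, y in zip(examples, labels) if y == minority]
    extra = [random.choice(minority_pool) for _ in range(gap)]
    pairs = list(zip(examples, labels)) + extra
    xs, ys = zip(*pairs)
    return list(xs), list(ys)

# Usage: turn subtask-2 ranking scores into labels, then balance the classes.
sentences = ["pair a", "pair b", "pair c", "pair d"]
scores = [0.9, 0.2, 0.7, 0.8]
labels = scores_to_labels(scores)
balanced_x, balanced_y = upsample(sentences, labels)
```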

Binary Classification TAG

HI-CMLM: Improve CMLM with Hybrid Decoder Input

no code implementations INLG (ACL) 2021 Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang

Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that the mechanism of predicting all of the target words depending only on the hidden state of [MASK] is neither effective nor efficient in the initial iterations of refinement, resulting in ungrammatical repetitions and slow convergence.
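For context, mask-predict decoding iteratively re-masks and re-predicts the least confident tokens. The schematic sketch below shows the baseline mechanism the paper criticizes, not the hybrid decoder input it proposes, and assumes a hypothetical `model` that returns a predicted token id and a confidence per position.

```python
import torch

def mask_predict(model, src, tgt_len, mask_id, iterations=10):
    # Start from an all-[MASK] target, as in vanilla CMLM decoding.
    tokens = torch.full((tgt_len,), mask_id, dtype=torch.long)
    probs = torch.zeros(tgt_len)
    for t in range(iterations):
        # Hypothetical model call: per-position token ids and confidences.
        pred_tokens, pred_probs = model(src, tokens)
        # Only masked positions are overwritten with fresh predictions.
        masked = tokens.eq(mask_id)
        tokens[masked] = pred_tokens[masked]
        probs[masked] = pred_probs[masked]
        # Re-mask the least confident tokens for the next refinement pass,
        # keeping more tokens fixed as iterations progress.
        n_mask = int(tgt_len * (1 - (t + 1) / iterations))
        if n_mask == 0:
            break
        worst = probs.topk(n_mask, largest=False).indices
        tokens[worst] = mask_id
    return tokens
```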

NMT Translation

Make the Blind Translator See The World: A Novel Transfer Learning Solution for Multimodal Machine Translation

no code implementations MTSummit 2021 Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang

Given the reliance on large-scale pretrained networks, the liability to easily overfit on the limited labelled training data of multimodal machine translation (MMT) is a critical issue in MMT.

Multimodal Machine Translation NMT +2

HW-TSC’s Submissions to the WMT21 Biomedical Translation Task

no code implementations WMT (EMNLP) 2021 Hao Yang, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Daimeng Wei, Zongyao Li, Hengchao Shang, Minghan Wang, Jiaxin Guo, Lizhi Lei, Chuanfei Xu, Min Zhang, Ying Qin

This paper describes the submission of Huawei Translation Service Center (HW-TSC) to WMT21 biomedical translation task in two language pairs: Chinese↔English and German↔English (Our registered team name is HuaweiTSC).

Translation

Efficient Transfer Learning for Quality Estimation with Bottleneck Adapter Layer

no code implementations EAMT 2020 Hao Yang, Minghan Wang, Ning Xie, Ying Qin, Yao Deng

Compared with the commonly used NuQE baseline, BAL-QE achieves performance gains of 47% (En-Ru) and 75% (En-De).

NMT Transfer Learning

Conversational SimulMT: Efficient Simultaneous Translation with Large Language Models

no code implementations 16 Feb 2024 Minghan Wang, Thuy-Trang Vu, Ehsan Shareghi, Gholamreza Haffari

Simultaneous machine translation (SimulMT) presents a challenging trade-off between translation quality and latency.

Machine Translation Translation

Factuality of Large Language Models in the Year 2024

no code implementations 4 Feb 2024 Yuxia Wang, Minghan Wang, Muhammad Arslan Manzoor, Fei Liu, Georgi Georgiev, Rocktim Jyoti Das, Preslav Nakov

Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple sources by offering a straightforward answer to a variety of questions in a single place.

Text Generation

Rethinking STS and NLI in Large Language Models

no code implementations 16 Sep 2023 Yuxia Wang, Minghan Wang, Preslav Nakov

Recent years have seen the rise of large language models (LLMs), where practitioners use task-specific prompts; this was shown to be effective for a variety of tasks.

Natural Language Inference Semantic Textual Similarity +1

Text Style Transfer Back-Translation

1 code implementation 2 Jun 2023 Daimeng Wei, Zhanglin Wu, Hengchao Shang, Zongyao Li, Minghan Wang, Jiaxin Guo, Xiaoyu Chen, Zhengzhe Yu, Hao Yang

To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses a style transfer model to modify the source side of BT data.
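A rough sketch of the pipeline as described in the abstract is shown below, with `back_translate` and `style_transfer` as hypothetical stand-ins for the actual models used in the paper.

```python
def build_tst_bt_data(target_mono, back_translate, style_transfer):
    """Construct Text Style Transfer Back Translation (TST BT) training pairs.

    target_mono: monolingual sentences in the target language.
    back_translate: hypothetical target->source translation model.
    style_transfer: hypothetical model that rewrites synthetic source
                    sentences into a more natural, in-domain style.
    """
    pairs = []
    for tgt in target_mono:
        # Standard back-translation: synthesize a source-side sentence.
        synthetic_src = back_translate(tgt)
        # TST BT: modify only the source side with a style transfer model;
        # the human-written target side is kept untouched.
        styled_src = style_transfer(synthetic_src)
        pairs.append((styled_src, tgt))
    return pairs
```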

Data Augmentation Domain Adaptation +4

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations 22 Dec 2021 Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven effective in improving neural machine translation (NMT) systems, but translation quality reaches an upper bound when the number of encoder layers exceeds 18.

Machine Translation NMT +1

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations 22 Dec 2021 Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains the model on the combination of raw and distilled data.
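The three stages read like a recipe; a minimal sketch of the loop is given below under hypothetical `train` and `translate` helpers (the paper's actual setup, e.g. data mixing ratios and distillation decoding settings, is not specified here).

```python
def self_distillation_mixup(train, translate, raw_pairs):
    """Schematic SDM training: pre-train, self-distill, re-train on the mix.

    train: hypothetical helper that trains an NMT model on (src, tgt) pairs.
    translate: hypothetical helper that decodes a source with a given model.
    raw_pairs: list of (src, tgt) parallel sentences.
    """
    # 1) Pre-train a model on the raw parallel data.
    model = train(raw_pairs)

    # 2) Self-distillation: the pre-trained model re-translates its own
    #    training sources to produce distilled targets.
    sources = [src for src, _ in raw_pairs]
    distilled_pairs = [(src, translate(model, src)) for src in sources]

    # 3) Mixup: re-train on the combination of raw and distilled data.
    final_model = train(raw_pairs + distilled_pairs)
    return final_model
```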

Knowledge Distillation Machine Translation +1
