Search Results for author: Yuxia Wang

Found 18 papers, 4 papers with code

HI-CMLM: Improve CMLM with Hybrid Decoder Input

no code implementations INLG (ACL) 2021 Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang

Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that predicting all of the target words based only on the hidden state of [MASK] is neither effective nor efficient in the initial iterations of refinement, resulting in ungrammatical repetitions and slow convergence.
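The mask-predict decoding this paper builds on can be sketched as a toy loop: every target position starts as [MASK], the model predicts all positions in parallel, and each iteration re-masks the lowest-confidence predictions for another pass. The `toy_model` below is a hypothetical stand-in, not a real CMLM; only the loop structure follows Ghazvininejad et al. (2019).

```python
# Minimal sketch of mask-predict iterative refinement.
# `toy_model` is a hypothetical stand-in for a trained CMLM: it returns a
# (prediction, confidence) pair per position, with higher confidence when
# more neighbouring positions are already unmasked.

def toy_model(tokens):
    out = []
    for i, t in enumerate(tokens):
        if t != "[MASK]":
            out.append((t, 1.0))  # already-committed tokens keep full confidence
        else:
            context = sum(1 for x in tokens if x != "[MASK]")
            out.append((f"w{i}", 0.5 + 0.5 * context / len(tokens)))
    return out

def mask_predict(length, iterations=3):
    tokens = ["[MASK]"] * length
    for it in range(iterations):
        preds = toy_model(tokens)
        tokens = [p for p, _ in preds]
        # Linear schedule: re-mask the n lowest-confidence tokens,
        # with n shrinking to zero over the iterations.
        n_mask = length * (iterations - 1 - it) // iterations
        if n_mask == 0:
            break
        worst = sorted(range(length), key=lambda i: preds[i][1])[:n_mask]
        for i in worst:
            tokens[i] = "[MASK]"
    return tokens
```

With a real CMLM the re-predicted tokens change across iterations; the paper's criticism targets the very first iteration, where every position is conditioned only on [MASK] states.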

NMT · Translation

Noisy Label Regularisation for Textual Regression

1 code implementation COLING 2022 Yuxia Wang, Timothy Baldwin, Karin Verspoor

Training with noisy labelled data is known to be detrimental to model performance, especially for high-capacity neural network models in low-resource domains.


Learning from Unlabelled Data for Clinical Semantic Textual Similarity

no code implementations EMNLP (ClinicalNLP) 2020 Yuxia Wang, Karin Verspoor, Timothy Baldwin

Domain pretraining followed by task fine-tuning has become the standard paradigm for NLP tasks, but requires in-domain labelled data for task fine-tuning.

Semantic Textual Similarity · STS

Rethinking STS and NLI in Large Language Models

no code implementations 16 Sep 2023 Yuxia Wang, Minghan Wang, Preslav Nakov

In this study, we aim to rethink STS and NLI in the era of large language models (LLMs).


Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs

1 code implementation 25 Aug 2023 Yuxia Wang, Haonan Li, Xudong Han, Preslav Nakov, Timothy Baldwin

With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging.

Collective Human Opinions in Semantic Textual Similarity

1 code implementation 8 Aug 2023 Yuxia Wang, Shimin Tao, Ning Xie, Hao Yang, Timothy Baldwin, Karin Verspoor

Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard.

Semantic Textual Similarity · STS

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations 22 Dec 2021 Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but translation quality hits an upper bound once the number of encoder layers exceeds 18.

Machine Translation · NMT · +1

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations 22 Dec 2021 Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data by the pre-trained model itself and finally re-trains a model on the combination of raw data and distilled data.
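The three SDM steps described above (pre-train on raw data, self-distill with the pre-trained model, re-train on the combined data) can be sketched as a small pipeline. Both `train` and `generate` here are hypothetical stand-ins for a real NMT training run and beam-search decoding; only the data flow follows the description.

```python
# Hedged sketch of a Self-Distillation Mixup (SDM)-style pipeline.
# A "model" is represented as a list recording what it was trained on;
# real systems would train and decode actual NMT models instead.

def train(model_state, data):
    """Stand-in for a training run: record the data folded into the model."""
    return model_state + [("trained_on", tuple(data))]

def generate(model_state, sources):
    """Stand-in for decoding: the model produces a distilled target per source."""
    return [(src, f"distilled({src})") for src in sources]

def sdm_training(raw_pairs):
    # Step 1: pre-train a model on the raw parallel data.
    model = train([], raw_pairs)
    # Step 2: the pre-trained model distills its own training sources.
    distilled_pairs = generate(model, [src for src, _ in raw_pairs])
    # Step 3: re-train from scratch on raw + distilled data combined.
    final_model = train([], raw_pairs + distilled_pairs)
    return final_model, distilled_pairs
```

The key design point is that the distilled targets come from the model itself rather than from a separate autoregressive teacher, which is what distinguishes self-distillation from standard sequence-level knowledge distillation.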

Knowledge Distillation · Machine Translation · +1
