Search Results for author: Shimin Tao

Found 30 papers, 7 papers with code

Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation

1 code implementation28 Feb 2024 Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Hao Yang, Tong Xiao

The second step involves preserving dataset diversity through a clustering process. In our experiment, CaR selected a subset containing only 1.96% of Alpaca's IT data, yet the underlying AlpaCaR model trained on this subset outperforms Alpaca by an average of 32.1% in GPT-4 evaluations.

Clustering
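
As a rough, hedged illustration of the cluster-then-rank selection described in the abstract above (not the authors' exact CaR pipeline), the sketch below clusters instruction embeddings with k-means to preserve diversity and keeps the highest-scored items per cluster; the quality scorer, cluster count, and per-cluster budget are placeholder assumptions.

```python
# Hypothetical sketch of cluster-then-rank instruction selection (CaR-style).
# The quality scores and hyper-parameters are placeholders, not the paper's.
import numpy as np
from sklearn.cluster import KMeans

def select_instructions(embeddings: np.ndarray,
                        quality_scores: np.ndarray,
                        n_clusters: int = 10,
                        per_cluster: int = 2) -> list[int]:
    """Keep the highest-quality examples from each embedding cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(embeddings)
    kept = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Rank this cluster's members by the (assumed) quality estimate.
        best = idx[np.argsort(quality_scores[idx])[::-1][:per_cluster]]
        kept.extend(best.tolist())
    return sorted(kept)

# Toy usage with random embeddings and scores.
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))      # stand-in instruction embeddings
scores = rng.uniform(size=500)        # stand-in quality estimates
subset = select_instructions(emb, scores)
print(f"kept {len(subset)} of {len(emb)} instructions")
```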

Interpretable Online Log Analysis Using Large Language Models with Prompt Strategies

1 code implementation15 Aug 2023 Yilun Liu, Shimin Tao, Weibin Meng, Jingyu Wang, Wenbing Ma, Yanqing Zhao, Yuhang Chen, Hao Yang, Yanfei Jiang, Xun Chen

LogPrompt employs large language models (LLMs) to perform online log analysis tasks via a suite of advanced prompt strategies tailored for log tasks, which enhances LLMs' performance by up to 380.7% compared with simple prompts.

Anomaly Detection Log Parsing +1
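
To make the idea of task-tailored prompt strategies concrete, here is a minimal, hypothetical prompt builder for online log parsing in the spirit of LogPrompt; the instruction wording and the in-context example are illustrative assumptions, not the paper's exact prompts.

```python
# Hypothetical prompt builder for LLM-based log parsing (LogPrompt-style).
# The template wording and the demonstration example are illustrative only.
def build_log_parsing_prompt(raw_logs: list[str]) -> str:
    header = (
        "You are an expert in log analysis. Convert each raw log line into a "
        "template by replacing variable fields (IPs, IDs, paths, numbers) "
        "with <*>, and briefly explain your reasoning."
    )
    demo = (
        "Log: Connection from 10.0.0.5 closed after 132 ms\n"
        "Template: Connection from <*> closed after <*> ms"
    )
    body = "\n".join(f"Log: {line}\nTemplate:" for line in raw_logs)
    return f"{header}\n\n{demo}\n\n{body}"

prompt = build_log_parsing_prompt([
    "User 4711 failed to authenticate from 192.168.1.9",
    "Disk /dev/sda1 usage at 91 percent",
])
print(prompt)  # send this to any chat/completion LLM endpoint
```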

Neighbors Are Not Strangers: Improving Non-Autoregressive Translation under Low-Frequency Lexical Constraints

1 code implementation NAACL 2022 Chun Zeng, Jiangjie Chen, Tianyi Zhuang, Rui Xu, Hao Yang, Ying Qin, Shimin Tao, Yanghua Xiao

To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints.

Translation

Translate Meanings, Not Just Words: IdiomKB's Role in Optimizing Idiomatic Translation with Language Models

1 code implementation26 Aug 2023 Shuang Li, Jiangjie Chen, Siyu Yuan, Xinyi Wu, Hao Yang, Shimin Tao, Yanghua Xiao

To translate well, machine translation (MT) systems and general-purpose language models (LMs) need a deep understanding of both source and target languages and cultures.

Machine Translation Translation

Collective Human Opinions in Semantic Textual Similarity

1 code implementation8 Aug 2023 Yuxia Wang, Shimin Tao, Ning Xie, Hao Yang, Timothy Baldwin, Karin Verspoor

Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard.

Semantic Textual Similarity Sentence +1

HI-CMLM: Improve CMLM with Hybrid Decoder Input

no code implementations INLG (ACL) 2021 Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang

Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that predicting all of the target words based only on the hidden state of [MASK] is neither effective nor efficient in the initial iterations of refinement, resulting in ungrammatical repetitions and slow convergence.

NMT Translation
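
For context on the mask-predict mechanism this paper modifies, below is a schematic refinement loop in the general style of Ghazvininejad et al. (2019); the model is a stand-in callable and the masking schedule is simplified, so this is a sketch of the baseline idea, not the HI-CMLM algorithm itself.

```python
# Schematic mask-predict refinement loop (CMLM-style), with a stand-in model.
# `model` is assumed to return per-position token ids and confidences.
import numpy as np

MASK = -1

def mask_predict(model, length: int, iterations: int = 4) -> np.ndarray:
    tokens = np.full(length, MASK)             # start fully masked
    for t in range(iterations):
        preds, confidence = model(tokens)      # predict every position
        tokens = preds.copy()
        # Linear schedule: re-mask the least confident tokens for the next pass.
        n_mask = int(length * (1 - (t + 1) / iterations))
        if n_mask > 0:
            worst = np.argsort(confidence)[:n_mask]
            tokens[worst] = MASK
    return tokens

# Toy stand-in "model": random token ids with random confidences.
rng = np.random.default_rng(0)
toy_model = lambda toks: (rng.integers(0, 100, size=toks.shape),
                          rng.uniform(size=toks.shape))
print(mask_predict(toy_model, length=8))
```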

Make the Blind Translator See The World: A Novel Transfer Learning Solution for Multimodal Machine Translation

no code implementations MTSummit 2021 Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang

Multimodal machine translation (MMT) models built on large-scale pretrained networks are liable to overfit easily given the limited labelled training data, which is a critical issue in MMT.

Multimodal Machine Translation NMT +2

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations22 Dec 2021 Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but translation quality reaches an upper bound when the number of encoder layers exceeds 18.

Machine Translation NMT +1

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations22 Dec 2021 Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains the model on the combination of raw data and distilled data.

Knowledge Distillation Machine Translation +1
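
The three SDM stages summarized above (pre-train on raw data, self-distil, re-train on the mixture) can be outlined as follows; `train_fn` and `translate_fn` are placeholders for whatever NMT training and decoding routines one uses, so this only illustrates the data flow, not the paper's implementation.

```python
# Hypothetical outline of Self-Distillation Mixup (SDM) training.
# train_fn(pairs) -> model and translate_fn(model, src) -> hyp are placeholders
# for an actual NMT toolkit; only the data flow is illustrated here.
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (source sentence, target sentence)

def sdm_training(raw: List[Pair],
                 train_fn: Callable[[List[Pair]], object],
                 translate_fn: Callable[[object, str], str]) -> object:
    # 1) Pre-train on the raw parallel data.
    teacher = train_fn(raw)
    # 2) Self-distil: re-translate the source side with the pre-trained model.
    distilled = [(src, translate_fn(teacher, src)) for src, _ in raw]
    # 3) Re-train on the mixture of raw and distilled data.
    return train_fn(raw + distilled)

# Toy usage: a "model" is just its training data; "translation" upper-cases.
toy_train = lambda pairs: pairs
toy_translate = lambda model, src: src.upper()
model = sdm_training([("hallo welt", "hello world")], toy_train, toy_translate)
print(len(model))  # 2: one raw pair plus one self-distilled pair
```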

HwTscSU’s Submissions on WAT 2022 Shared Task

no code implementations WAT 2022 Yilun Liu, Zhen Zhang, Shimin Tao, Junhui Li, Hao Yang

In this paper we describe our submission to the NICT–SAP shared task of the 9th Workshop on Asian Translation (WAT 2022), under the team name "HwTscSU".

Domain Adaptation NMT +1

P-Transformer: Towards Better Document-to-Document Neural Machine Translation

no code implementations12 Dec 2022 Yachao Li, Junhui Li, Jing Jiang, Shimin Tao, Hao Yang, Min Zhang

To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both the absolute and relative position information in both self-attention and cross-attention.

Machine Translation NMT +3
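
As background for "enhancing relative position information in attention", the snippet below adds a learned relative-position bias to scaled dot-product attention in the generic style popularized by Shaw et al. and T5; it is not P-Transformer's specific formulation, and the shapes, clipping distance, and bias parameterization are illustrative assumptions.

```python
# Generic scaled dot-product attention with a relative-position bias term.
# Illustrates injecting relative position information into attention scores;
# this is NOT the P-Transformer formulation from the paper.
import numpy as np

def attention_with_relative_bias(q, k, v, rel_bias, max_dist=8):
    # q, k, v: (seq_len, d); rel_bias: (2 * max_dist + 1,) learned scalars.
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Bias depends only on the (clipped) query-key offset.
    offsets = np.clip(np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :],
                      -max_dist, max_dist) + max_dist
    scores = scores + rel_bias[offsets]
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(5, 16))
bias = rng.normal(size=(17,))  # 2 * 8 + 1 relative-offset buckets
print(attention_with_relative_bias(q, k, v, bias).shape)  # (5, 16)
```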

DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators

no code implementations23 Feb 2024 Xinglin Lyu, Junhui Li, Yanqing Zhao, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang

In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), to make LLMs model and utilize inter- and intra-sentence context separately, and to more effectively adapt LLMs to context-aware NMT.

Machine Translation NMT +1

From Handcrafted Features to LLMs: A Brief Survey for Machine Translation Quality Estimation

no code implementations21 Mar 2024 Haofei Zhao, Yilun Liu, Shimin Tao, Weibin Meng, Yimeng Chen, Xiang Geng, Chang Su, Min Zhang, Hao Yang

Machine Translation Quality Estimation (MTQE) is the task of estimating the quality of machine-translated text in real time without the need for reference translations, which is of great importance for the development of MT.

Machine Translation Sentence
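
To make the reference-free QE setting concrete, here is a deliberately naive, hypothetical baseline that regresses a quality score from simple surface features of the (source, hypothesis) pair; real MTQE systems from the handcrafted-feature era through the LLM era are far richer, so this only frames the task, and the tiny training set is synthetic toy data.

```python
# Naive, hypothetical sketch of reference-free MT quality estimation:
# regress a quality score from surface features of (source, hypothesis).
# Real QE systems use far richer features or pretrained models.
import numpy as np
from sklearn.linear_model import Ridge

def features(src: str, hyp: str) -> list[float]:
    s, h = src.split(), hyp.split()
    copied = sum(tok in set(s) for tok in h) / max(len(h), 1)
    return [len(h) / max(len(s), 1),   # length ratio
            copied,                    # possibly untranslated source words
            float(len(h) == 0)]        # empty-output flag

# Tiny synthetic toy set: (source, hypothesis, pseudo quality score).
data = [
    ("the cat sat on the mat", "le chat est assis sur le tapis", 0.9),
    ("the cat sat on the mat", "chat tapis", 0.3),
    ("open the window please", "ouvre la fenetre s'il te plait", 0.85),
    ("open the window please", "", 0.0),
]
X = np.array([features(s, h) for s, h, _ in data])
y = np.array([q for _, _, q in data])
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict([features("close the door", "ferme la porte")]))
```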
