no code implementations • WMT (EMNLP) 2020 • Jiayi Wang, Ke Wang, Kai Fan, Yuqi Zhang, Jun Lu, Xin Ge, Yangbin Shi, Yu Zhao
We also apply an imitation learning strategy to generate a reasonable amount of pseudo APE training data, which helps prevent the model from overfitting on the limited real training data and boosts performance on held-out data.
no code implementations • WMT (EMNLP) 2020 • Jun Lu, Xin Ge, Yangbin Shi, Yuqi Zhang
In the filtering task, three main methods are applied to evaluate the quality of the parallel corpus, i.e., a) a Dual Bilingual GPT-2 model, b) a Dual Conditional Cross-Entropy Model, and c) the IBM word alignment model.
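As a rough illustration of method b), dual conditional cross-entropy filtering scores a sentence pair by averaging the per-token cross-entropies of forward and backward translation models and penalizing disagreement between the two directions. The sketch below assumes the per-token cross-entropies have already been computed by two hypothetical NMT models; only the scoring formula is shown.

```python
import math

def dual_ce_score(h_fwd: float, h_bwd: float) -> float:
    """Quality score in (0, 1] from per-token cross-entropies of a
    forward (src->tgt) and backward (tgt->src) translation model.

    Pairs where both models agree and assign low cross-entropy score
    highest; disagreement between directions is penalized.
    """
    disagreement = abs(h_fwd - h_bwd)       # penalty for asymmetric adequacy
    avg = 0.5 * (h_fwd + h_bwd)             # overall model confidence
    return math.exp(-(disagreement + avg))  # map to (0, 1], higher is better
```

In practice the corpus would be ranked by this score and a top fraction kept; the cross-entropy inputs here are placeholders for real model scores.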
no code implementations • Findings (EMNLP) 2021 • Ke Wang, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, Xiaolin Zheng
Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT).
1 code implementation • Findings (ACL) 2021 • Jinpeng Zhang, Baijun Ji, Nini Xiao, Xiangyu Duan, Min Zhang, Yangbin Shi, Weihua Luo
Bilingual Lexicon Induction (BLI) aims to map words in one language to their translations in another, typically by learning linear projections that align the monolingual word representation spaces.
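A common baseline for learning such a linear projection (not necessarily the method of this paper) is the orthogonal Procrustes solution: given embeddings of seed dictionary pairs stacked as rows of X (source) and Y (target), the orthogonal map minimizing ||XW - Y|| has a closed form via SVD. A minimal NumPy sketch:

```python
import numpy as np

def procrustes_map(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal matrix W minimizing ||X @ W - Y||_F.

    X, Y: (n_pairs, dim) embeddings of seed translation pairs.
    Closed-form solution: W = U @ Vt where U, S, Vt = svd(X.T @ Y).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

After fitting, a source word is translated by projecting its embedding with W and taking the nearest neighbor in the target space.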
no code implementations • WS 2018 • Jun Lu, Xiaoyu Lv, Yangbin Shi, Boxing Chen
This paper describes the Alibaba Machine Translation Group submissions to the WMT 2018 Shared Task on Parallel Corpus Filtering.
no code implementations • WS 2018 • Jiayi Wang, Kai Fan, Bo Li, Fengming Zhou, Boxing Chen, Yangbin Shi, Luo Si
The goal of WMT 2018 Shared Task on Translation Quality Estimation is to investigate automatic methods for estimating the quality of machine translation results without reference translations.