no code implementations • EACL (DravidianLangTech) 2021 • Wanying Xie
In this paper, we describe the GX system in the EACL 2021 shared task on machine translation in Dravidian languages.
no code implementations • WMT (EMNLP) 2021 • Han Yang, Bojie Hu, Wanying Xie, Ambyera Han, Pan Liu, Jinan Xu, Qi Ju
This paper describes TenTrans’ submission to WMT21 Multilingual Low-Resource Translation shared task for the Romance language pairs.
no code implementations • WMT (EMNLP) 2021 • Wanying Xie, Bojie Hu, Han Yang, Dong Yu, Qi Ju
This paper describes the TenTrans large-scale multilingual machine translation system for WMT 2021.
1 code implementation • SEMEVAL 2021 • Wanying Xie
This paper presents the GX system for the Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC) task.
1 code implementation • ACL 2021 • Wanying Xie, Yang Feng, Shuhao Gu, Dong Yu
Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages.
1 code implementation • NAACL 2021 • Shuhao Gu, Yang Feng, Wanying Xie
Domain adaptation is widely used in practical applications of neural machine translation; it aims to achieve good performance on both general-domain and in-domain data.
1 code implementation • EMNLP 2020 • Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie zhou, Dong Yu
The vanilla NMT model usually adopts trivial equal-weighted objectives for target tokens with different frequencies and tends to generate more high-frequency tokens and fewer low-frequency tokens than the gold token distribution.
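The idea of up-weighting rare target tokens in the training objective can be sketched as follows. This is a generic inverse-frequency weighting scheme, not the paper's exact formula; the function names and the `alpha` smoothing exponent are assumptions for illustration.

```python
import math
from collections import Counter

def token_weights(corpus_tokens, alpha=0.5):
    """Assign larger weights to rarer tokens via smoothed inverse frequency.
    (Illustrative scheme only, not the paper's adaptive objective.)"""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {tok: (total / c) ** alpha for tok, c in counts.items()}

def weighted_nll(log_probs, targets, weights):
    """Cross-entropy where each target token contributes according to its
    weight, instead of the vanilla equal-weighted objective.
    `log_probs` is a list of {token: log-probability} dicts, one per step."""
    loss, norm = 0.0, 0.0
    for step_lp, tok in zip(log_probs, targets):
        w = weights.get(tok, 1.0)
        loss += -w * step_lp[tok]
        norm += w
    return loss / norm

# A low-frequency token gets a larger weight than a high-frequency one,
# so mistakes on rare tokens cost more during training.
corpus = ["the", "the", "the", "cat"]
w = token_weights(corpus)
```

With this corpus, `w["cat"]` exceeds `w["the"]`, so the loss penalizes errors on the rare token more heavily.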
1 code implementation • 30 Nov 2019 • Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, Dong Yu
Neural machine translation models usually adopt the teacher forcing strategy for training, which requires that the predicted sequence match the ground truth word by word and forces the probability of each prediction to approach a 0-1 distribution.
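The teacher forcing loop described above can be sketched as follows. This is a minimal sketch, not the paper's implementation; the `model_step` interface (a callable returning next-token probabilities given the previous token) is an assumption made for illustration.

```python
import math

def teacher_forcing_loss(model_step, target, bos="<s>"):
    """Train-time decoding under teacher forcing: at every step the *gold*
    previous token is fed back, regardless of what the model would have
    predicted, and the per-step loss -log p(gold) drives the predicted
    distribution toward a 0-1 (one-hot) distribution on the gold token.
    `model_step(prev_token)` returns a dict of next-token probabilities."""
    loss, prev = 0.0, bos
    for gold in target:
        probs = model_step(prev)
        loss += -math.log(probs.get(gold, 1e-9))
        prev = gold  # teacher forcing: feed the ground truth, not the argmax
    return loss / len(target)
```

At inference time no gold sequence exists, so the model must instead consume its own predictions; this train/test mismatch is the usual motivation for moving beyond plain teacher forcing.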
no code implementations • SEMEVAL 2019 • Ruoyao Yang, Wanying Xie, Chunhua Liu, Dong Yu
Researchers have been paying increasing attention to rumour evaluation due to the rapid spread of unsubstantiated rumours on social media platforms, including SemEval 2019 task 7.
no code implementations • SEMEVAL 2019 • Wanying Xie, Mengxi Que, Ruoyao Yang, Chunhua Liu, Dong Yu
For contextual knowledge enhancement, we extend the training set of subtask A, use several features to improve the results of our system and adapt the input formats to be more suitable for this task.