Search Results for author: Qu Cui

Found 6 papers, 2 papers with code

NJU’s submission to the WMT20 QE Shared Task

no code implementations · WMT (EMNLP) 2020 · Qu Cui, Xiang Geng, ShuJian Huang, Jiajun Chen

This paper describes our system for the sentence-level and word-level Quality Estimation Shared Tasks of WMT20.

Language Modelling · Sentence

Cross-lingual Contextualized Phrase Retrieval

1 code implementation · 25 Mar 2024 · Huayang Li, Deng Cai, Zhi Qu, Qu Cui, Hidetaka Kamigaito, Lemao Liu, Taro Watanabe

In our work, we propose a new task formulation of dense retrieval, cross-lingual contextualized phrase retrieval, which aims to augment cross-lingual applications by addressing polysemy using context information.

Contrastive Learning · Language Modelling · +4
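The task formulation above can be illustrated with a minimal dense-retrieval toy: a phrase is represented by a context-dependent vector, and cross-lingual matches are found by maximum inner product search. The index entries, vectors, and the `retrieve` helper below are hand-made illustrations, not the paper's actual retriever or data.

```python
# Toy sketch of context-aware dense phrase retrieval (illustrative only).
# Index maps hypothetical Spanish phrase senses to hand-made dense vectors.
index = {
    "banco (institution)": [0.9, 0.1, 0.0],
    "banco (riverside)":   [0.1, 0.9, 0.0],
}

def retrieve(query_vec, index):
    """Return the index entry with the highest inner product to the query."""
    return max(index, key=lambda name: sum(q * v
                                           for q, v in zip(query_vec, index[name])))

# Two contextualized embeddings of the polysemous English phrase "bank":
# the surrounding context shifts the vector, so retrieval disambiguates it.
bank_finance = [0.8, 0.2, 0.1]   # "...deposit money at the bank..."
bank_river = [0.2, 0.8, 0.1]     # "...fishing on the bank of the river..."
print(retrieve(bank_finance, index))  # banco (institution)
print(retrieve(bank_river, index))    # banco (riverside)
```

The point of the toy is only the mechanism: because the query vector depends on context, the same surface phrase can retrieve different cross-lingual entries.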

Simple and Scalable Nearest Neighbor Machine Translation

1 code implementation · 23 Feb 2023 · Yuhan Dai, Zhirui Zhang, Qiuzhi Liu, Qu Cui, Weihua Li, Yichao Du, Tong Xu

$k$NN-MT is a straightforward yet powerful approach for fast domain adaptation, which directly plugs pre-trained neural machine translation (NMT) models with domain-specific token-level $k$-nearest-neighbor ($k$NN) retrieval to achieve domain adaptation without retraining.

Domain Adaptation · Machine Translation · +4
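The token-level $k$NN retrieval described above can be sketched in a few lines: the datastore maps decoder hidden states (keys) to target tokens (values), and the retrieved neighbors yield a distribution that is interpolated with the base NMT distribution. The datastore contents, dimensions, and hyperparameters below are toy illustrations, not the paper's setup.

```python
import math
import random

random.seed(0)

# Hypothetical toy datastore: (key, value) pairs, where each key stands in
# for a decoder hidden state and each value for a target token id.
DIM, VOCAB = 8, 20
datastore = [([random.gauss(0, 1) for _ in range(DIM)],
              random.randrange(VOCAB)) for _ in range(100)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def knn_mt_distribution(query, p_nmt, k=4, temperature=10.0, lam=0.5):
    """Build a kNN distribution from the k nearest datastore keys
    (squared L2 distance, softmax over negative distances) and
    interpolate it with the base NMT distribution."""
    neighbors = sorted(datastore, key=lambda kv: sq_dist(kv[0], query))[:k]
    logits = [-sq_dist(key, query) / temperature for key, _ in neighbors]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    p_knn = [0.0] * VOCAB
    for (_, token), w in zip(neighbors, weights):
        p_knn[token] += w / z
    # p = lambda * p_kNN + (1 - lambda) * p_NMT
    return [lam * pk + (1.0 - lam) * pn for pk, pn in zip(p_knn, p_nmt)]

query = [random.gauss(0, 1) for _ in range(DIM)]
p_nmt = [1.0 / VOCAB] * VOCAB      # uniform base distribution, for the toy
p = knn_mt_distribution(query, p_nmt)
```

Because retrieval happens at decoding time over a domain-specific datastore, the base model needs no retraining for domain adaptation, which is the property the abstract highlights.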

DirectQE: Direct Pretraining for Machine Translation Quality Estimation

no code implementations · 15 May 2021 · Qu Cui, ShuJian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, Jiajun Chen

However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which prevent QE models from benefiting more directly from large amounts of parallel corpora.

Machine Translation · Translation
