Search Results for author: Guoping Huang

Found 20 papers, 3 papers with code

Language-Independent Representor for Neural Machine Translation

no code implementations • 1 Nov 2018 • Long Zhou, Yuchen Liu, Jiajun Zhang, Cheng-qing Zong, Guoping Huang

Current Neural Machine Translation (NMT) employs a language-specific encoder to represent the source sentence and adopts a language-specific decoder to generate target translation.

Machine Translation • Multi-Task Learning • +3

Learning from Parenthetical Sentences for Term Translation in Machine Translation

no code implementations • WS 2017 • Guoping Huang, Jiajun Zhang, Yu Zhou, Cheng-qing Zong

Terms extensively exist in specific domains, and term translation plays a critical role in domain-specific machine translation (MT) tasks.

Machine Translation • Sentence • +1

Neural Machine Translation with Noisy Lexical Constraints

no code implementations • 13 Aug 2019 • Huayang Li, Guoping Huang, Deng Cai, Lemao Liu

Experiments show that our approach can indeed improve the translation quality with the automatically generated constraints.

Machine Translation • Open-Ended Question Answering • +1

Regularized Context Gates on Transformer for Machine Translation

no code implementations • ACL 2020 • Xintong Li, Lemao Liu, Rui Wang, Guoping Huang, Max Meng

This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in Transformer.

Machine Translation • NMT • +1
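The context-gate idea above can be sketched in a few lines. Everything here is an illustrative assumption (toy dimension, random parameters, a sigmoid gate over a linear map of both contexts), not the paper's exact formulation or its regularization scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy hidden size; real models use hundreds of dimensions

# Toy source context s and target context t, as might appear inside a decoder layer.
s = rng.standard_normal(d)
t = rng.standard_normal(d)

# Hypothetical gate parameters; in practice these are learned end to end.
W_s = rng.standard_normal((d, d)) * 0.1
W_t = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Context gate: an element-wise value in (0, 1) deciding how much source
# vs. target context flows into the combined representation.
g = sigmoid(W_s @ s + W_t @ t + b)
h = g * s + (1.0 - g) * t  # gated combination of the two contexts

print(g, h)
```

A gate near 1 lets the source context dominate a given dimension; regularizing the gate (as the title suggests) would constrain how far it drifts from a neutral mixing of the two.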

Evaluating Explanation Methods for Neural Machine Translation

no code implementations • ACL 2020 • Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi

Recently many efforts have been devoted to interpreting the black-box NMT models, but little progress has been made on metrics to evaluate explanation methods.

Machine Translation • NMT • +2

On the Branching Bias of Syntax Extracted from Pre-trained Language Models

no code implementations • Findings of the Association for Computational Linguistics 2020 • Huayang Li, Lemao Liu, Guoping Huang, Shuming Shi

Many efforts have been devoted to extracting constituency trees from pre-trained language models, often proceeding in two stages: feature definition and parsing.

DirectQE: Direct Pretraining for Machine Translation Quality Estimation

no code implementations • 15 May 2021 • Qu Cui, ShuJian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, Jiajun Chen

However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which preclude QE models from benefiting from a large number of parallel corpora more directly.

Machine Translation • Translation

GWLAN: General Word-Level AutocompletioN for Computer-Aided Translation

no code implementations • ACL 2021 • Huayang Li, Lemao Liu, Guoping Huang, Shuming Shi

In this paper, we propose the task of general word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct the first public benchmark to facilitate research on this topic.

Sentence • Translation
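The task setup can be illustrated with a toy sketch: given the characters a translator has typed, propose the full word they likely intend. The corpus, frequency-based scoring, and `autocomplete` helper below are stand-in assumptions; GWLAN scores candidates with a neural model conditioned on the source sentence and the partial translation, not on raw frequency:

```python
from collections import Counter

# Toy "translation corpus" standing in for a learned model's vocabulary statistics.
corpus = "the cat sat on the mat the cat ran".split()
freq = Counter(corpus)

def autocomplete(typed_prefix, vocab_freq):
    """Return vocabulary words matching the typed characters, best-scored first."""
    candidates = [w for w in vocab_freq if w.startswith(typed_prefix)]
    # Stand-in scorer: corpus frequency; a real system conditions on context.
    return sorted(candidates, key=lambda w: -vocab_freq[w])

print(autocomplete("ca", freq))  # → ['cat']
```

The benchmark's difficulty lies in the scoring step this sketch elides: ranking candidates using bidirectional translation context rather than prefix frequency alone.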

Effidit: Your AI Writing Assistant

no code implementations • 3 Aug 2022 • Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai, Dongyang Ma

In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).

Keywords to Sentences • Retrieval • +3

Rethinking Translation Memory Augmented Neural Machine Translation

no code implementations • 12 Jun 2023 • Hongkun Hao, Guoping Huang, Lemao Liu, Zhirui Zhang, Shuming Shi, Rui Wang

The finding demonstrates that TM-augmented NMT is good at fitting data (i.e., lower bias) but is more sensitive to fluctuations in the training data (i.e., higher variance). This provides an explanation for a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances vanilla NMT in the high-resource scenario, whereas it fails in the low-resource scenario.

Machine Translation • NMT • +2
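The bias-variance trade-off invoked above can be demonstrated with a classical toy experiment, entirely separate from NMT: fitting polynomials of low and high capacity to resampled noisy data. The target function, noise level, degrees, and trial counts below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Ground-truth function the models try to recover.
    return np.sin(2 * np.pi * x)

def experiment(degree, n_trials=200, n_points=15, noise=0.3):
    """Fit a polynomial of `degree` on many resampled noisy datasets and
    measure squared bias and variance of its predictions at fixed inputs."""
    x_train = np.linspace(0.0, 1.0, n_points)  # fixed design, noise resampled
    x_test = np.linspace(0.0, 1.0, 50)
    preds = np.empty((n_trials, x_test.size))
    for i in range(n_trials):
        y = true_fn(x_train) + noise * rng.standard_normal(n_points)
        coeffs = np.polyfit(x_train, y, degree)
        preds[i] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_fn(x_test)) ** 2)
    variance = preds.var(axis=0).mean()
    return bias2, variance

b_low, v_low = experiment(degree=1)    # low-capacity model
b_high, v_high = experiment(degree=9)  # high-capacity model

# Expected pattern: the high-capacity model has lower bias but higher variance,
# mirroring the paper's characterization of TM-augmented vs. vanilla NMT.
print(b_low, v_low)
print(b_high, v_high)
```

In this analogy, the translation memory plays the role of extra model capacity: it helps when data is plentiful (bias dominates) and hurts when data is scarce (variance dominates).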
