In this work, we propose BiTIIMT, a novel system for Bilingual Text-Infilling in Interactive Neural Machine Translation.
In Effidit, we significantly expand the capabilities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).
To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
Current practices in metric evaluation focus on a single dataset, e.g., the Newstest dataset from each year's WMT Metrics Shared Task.
In this paper, we propose the task of general word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct the first public benchmark to facilitate research on this topic.
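To make the task concrete, here is a minimal frequency-based sketch of word-level autocompletion: given a typed character prefix and the left context, it returns the most plausible completion. The function name, the bigram-backed scoring, and the toy corpus are all illustrative assumptions, not the benchmark's actual (neural) method.

```python
from collections import Counter

def autocomplete(prefix, left_context, corpus):
    """Toy word-level autocompletion (illustrative, not the GWLAN model):
    rank candidate words that start with the typed prefix by their bigram
    count with the last context word, breaking ties by unigram frequency."""
    words = corpus.split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    candidates = [w for w in unigrams if w.startswith(prefix)]
    if not candidates:
        return None
    ctx = left_context.split()
    prev = ctx[-1] if ctx else None
    return max(candidates, key=lambda w: (bigrams[(prev, w)], unigrams[w]))

corpus = "the cat sat on the mat the cat ran"
print(autocomplete("c", "the", corpus))  # -> "cat"
```

A real system would replace the count-based scorer with a model conditioned on both left and right target context, but the input/output contract (prefix + context in, one word out) is the same.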
Automatic machine translation produces translations efficiently, yet their quality is not guaranteed.
However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which prevent QE models from benefiting more directly from large-scale parallel corpora.
We propose a touch-based editing method for translation, which is more flexible than traditional keyboard-and-mouse translation post-editing.
Many efforts have been devoted to extracting constituency trees from pre-trained language models, often proceeding in two stages: feature definition and parsing.
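The second of the two stages can be sketched with a tiny example: assuming the first stage has already produced "syntactic distances" for each gap between adjacent words (e.g., derived from a pre-trained LM's attention or hidden states; the values below are made up), the parser recursively splits the sentence at the largest distance to induce an unlabeled binary tree.

```python
def build_tree(words, dists):
    """Distance-based parsing stage (illustrative): split the span at the
    gap with the largest syntactic distance, then recurse on both halves.
    `dists[i]` is the distance between words[i] and words[i+1]."""
    if len(words) == 1:
        return words[0]
    i = max(range(len(dists)), key=lambda j: dists[j])
    left = build_tree(words[:i + 1], dists[:i])
    right = build_tree(words[i + 1:], dists[i + 1:])
    return (left, right)

words = ["the", "cat", "sat", "down"]
dists = [0.2, 0.9, 0.4]  # hypothetical first-stage distances
print(build_tree(words, dists))  # -> (('the', 'cat'), ('sat', 'down'))
```

The feature-definition stage is where proposals differ; once distances are fixed, this greedy top-down split is a common, deterministic way to read off a tree.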
Recently, many efforts have been devoted to interpreting black-box NMT models, but little progress has been made on metrics for evaluating explanation methods.
Many Data Augmentation (DA) methods have been proposed for neural machine translation.
This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in Transformer.
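A gate of this kind can be sketched as a per-dimension convex combination of the two context vectors, with the gate value computed from both. This is a minimal plain-Python illustration; the function names, the elementwise parameterization, and the toy weights are assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(source_ctx, target_ctx, w_s, w_t, b):
    """Hypothetical gate controlling source vs. target contributions:
    per dimension, g = sigmoid(w_s*s + w_t*t + b) and the output is
    g*s + (1-g)*t, i.e., g near 1 favors the source context."""
    out = []
    for s, t, ws, wt, bb in zip(source_ctx, target_ctx, w_s, w_t, b):
        g = sigmoid(ws * s + wt * t + bb)
        out.append(g * s + (1.0 - g) * t)
    return out

src = [0.5, -1.0]
tgt = [1.0, 2.0]
print(gated_fusion(src, tgt, [1.0, 1.0], [1.0, 1.0], [0.0, 0.0]))
```

Because the output is a convex combination, each dimension always lies between the source and target values, which is what lets the gate smoothly trade off the two contributions.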
Experiments show that our approach can indeed improve the translation quality with the automatically generated constraints.
Current Neural Machine Translation (NMT) employs a language-specific encoder to represent the source sentence and adopts a language-specific decoder to generate the target translation.
Terms occur extensively in specific domains, and term translation plays a critical role in domain-specific machine translation (MT) tasks.