Document Level Machine Translation
14 papers with code • 1 benchmark • 1 dataset
Most implemented papers
BlonDe: An Automatic Evaluation Metric for Document-level Machine Translation
Standard automatic metrics, e.g., BLEU, are not reliable for document-level MT evaluation.
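As a minimal illustration of the problem (a sketch using the sacrebleu package; the example sentences are illustrative, not from the paper), corpus BLEU gives the same score to a coreference-breaking pronoun error and to a harmless one-word synonym, so it cannot separate discourse errors from ordinary lexical variation:

```python
import sacrebleu

# One reference stream covering a two-sentence "document".
refs = [[
    "She opened the door quietly and stepped inside .",
    "She told them that she was too tired to continue .",
]]

# Hypothesis A: breaks cross-sentence coreference ("she" -> "he").
hyp_pronoun_error = [
    "She opened the door quietly and stepped inside .",
    "She told them that he was too tired to continue .",
]

# Hypothesis B: harmless one-word synonym ("tired" -> "sleepy").
hyp_synonym = [
    "She opened the door quietly and stepped inside .",
    "She told them that she was too sleepy to continue .",
]

# Both substitutions knock out the same number of matching n-grams, so BLEU
# penalizes the discourse error and the benign variation equally.
print(sacrebleu.corpus_bleu(hyp_pronoun_error, refs).score)
print(sacrebleu.corpus_bleu(hyp_synonym, refs).score)
```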
Using Coreference Links to Improve Spanish-to-English Machine Translation
In this paper, we present a proof-of-concept implementation of a coreference-aware decoder for document-level machine translation.
A Survey on Document-level Neural Machine Translation: Methods and Evaluation
Machine translation (MT) is an important task in natural language processing (NLP) as it automates the translation process and reduces the reliance on human translators.
Towards Making the Most of Context in Neural Machine Translation
Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted.
Measuring and Increasing Context Usage in Context-Aware Machine Translation
Recent work in neural machine translation has demonstrated both the necessity and feasibility of using inter-sentential context -- context from sentences other than those currently being translated.
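One simple way to expose inter-sentential context to an otherwise sentence-level NMT model is to concatenate the preceding source sentences onto the current one before translation. The sketch below shows this concatenation idea in general; the helper name and the <sep> token are illustrative, not the specific method of the paper.

```python
from typing import List

def build_contextual_inputs(doc_sentences: List[str], k: int = 1,
                            sep: str = " <sep> ") -> List[str]:
    """Return one model input per sentence, each carrying up to k
    preceding sentences from the same document as context."""
    inputs = []
    for i, sent in enumerate(doc_sentences):
        context = doc_sentences[max(0, i - k):i]
        inputs.append(sep.join(context + [sent]) if context else sent)
    return inputs

doc = ["The vase fell off the shelf.", "It shattered on the floor."]
print(build_contextual_inputs(doc, k=1))
# ['The vase fell off the shelf.',
#  'The vase fell off the shelf. <sep> It shattered on the floor.']
```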
G-Transformer for Document-level Machine Translation
However, studies show that when we further enlarge the translation unit to a whole document, supervised training of the Transformer can fail.
DiscoScore: Evaluating Text Generation with BERT and Discourse Coherence
Still, recent BERT-based evaluation metrics are weak at recognizing coherence, and thus cannot reliably spot the discourse-level improvements of those text generation systems.
Modeling Context With Linear Attention for Scalable Document-Level Translation
Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations.
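For intuition on why linear attention helps here, the sketch below implements kernelized linear attention in NumPy (feature map phi(x) = elu(x) + 1, as in Katharopoulos et al., 2020): keys and values are summarized once, so cost grows linearly with sequence length, which is what makes attending over an entire document tractable. This illustrates the general technique under those assumptions, not the exact model in the paper.

```python
import numpy as np

def elu_plus_one(x):
    # elu(x) + 1, which is strictly positive and serves as the feature map.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Q, K: (n, d); V: (n, d_v). Returns (n, d_v) in O(n * d * d_v) time."""
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)
    kv = Kf.T @ V                     # (d, d_v): one summary of all keys/values
    z = Qf @ Kf.sum(axis=0)           # (n,): per-query normalization terms
    return (Qf @ kv) / z[:, None]

n, d = 2048, 64                        # e.g. a whole document of 2048 tokens
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (2048, 64)
```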
A Bilingual Parallel Corpus with Discourse Annotations
The BWB corpus consists of Chinese novels translated by experts into English, and the annotated test set is designed to probe the ability of machine translation systems to model various discourse phenomena.
Document-Level Machine Translation with Large Language Models
Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
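A minimal sketch of the prompting setup this line of work relies on: asking the LLM to translate the whole document in one request, so it can keep pronouns and terminology consistent across sentences. The prompt wording and helper below are illustrative, not taken from the paper.

```python
def build_doc_translation_prompt(sentences, src_lang="Chinese", tgt_lang="English"):
    # Number the sentences so the model can return one translation per line.
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    return (
        f"Translate the following {src_lang} document into {tgt_lang}. "
        f"Keep pronouns, names, and terminology consistent across sentences, "
        f"and output one translated sentence per line.\n\n{numbered}"
    )

print(build_doc_translation_prompt(["他打开了门。", "他很累。"]))
```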