Low Resource NMT
11 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Revisiting Low-Resource Neural Machine Translation: A Case Study
It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results.
On Optimal Transformer Depth for Low-Resource Language Translation
Therefore, by showing that transformer models perform well (and often best) at low-to-moderate depth, we hope to convince fellow researchers to devote fewer computational resources, and less time, to exploring overly large models during the development of these systems.
JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation
Monolingual pre-training approaches such as MASS (MAsked Sequence to Sequence) are extremely effective in boosting NMT quality for languages with small parallel corpora.
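The core idea behind MASS-style pre-training can be illustrated with a small, self-contained sketch: a contiguous span of a monolingual sentence is masked on the encoder side, and the decoder is trained to reconstruct exactly that span. The snippet below shows only this data-preparation step under assumed defaults (the function name, mask token, and mask ratio are placeholders), not the authors' implementation.

```python
import random

MASK = "<mask>"

def make_mass_example(tokens, mask_ratio=0.5, seed=None):
    """Turn one monolingual sentence into a MASS-style
    (encoder input, decoder target) pair: a contiguous span is
    replaced by <mask> on the encoder side, and the decoder is
    trained to reconstruct that span."""
    rng = random.Random(seed)
    span_len = max(1, int(len(tokens) * mask_ratio))
    start = rng.randint(0, len(tokens) - span_len)
    target_span = tokens[start:start + span_len]
    encoder_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    return encoder_input, target_span

enc_in, dec_tgt = make_mass_example("the cat sat on the mat".split(), seed=0)
print(enc_in)   # e.g. ['the', 'cat', '<mask>', '<mask>', '<mask>', 'mat']
print(dec_tgt)  # e.g. ['sat', 'on', 'the']
```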
Sicilian Translator: A Recipe for Low-Resource NMT
With 17,000 pairs of Sicilian-English translated sentences, Arba Sicula developed the first neural machine translator for the Sicilian language.
Towards Better Chinese-centric Neural Machine Translation for Low-resource Languages
The last decade has witnessed enormous improvements in science and technology, stimulating growing demand for economic and cultural exchange among countries.
ConsistTL: Modeling Consistency in Transfer Learning for Low-Resource Neural Machine Translation
In this paper, we propose a novel transfer learning method for NMT, namely ConsistTL, which can continuously transfer knowledge from the parent model during the training of the child model.
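One way to picture this continuous parent-to-child transfer is a distillation-style consistency term: the child model fits the gold target while its output distribution is also pulled toward the prediction of a frozen parent model. The sketch below assumes such a loss; the function name, the KL formulation, and the weighting factor `alpha` are illustrative choices, not necessarily the exact form used by ConsistTL.

```python
import torch
import torch.nn.functional as F

def consistency_loss(child_logits, parent_logits, gold_ids, alpha=0.5):
    """Cross-entropy on the gold target plus a consistency term that
    pulls the child's distribution toward a frozen parent model's
    prediction. `alpha` is a placeholder weighting."""
    ce = F.cross_entropy(child_logits, gold_ids)
    # KL(parent || child): input must be log-probs, target must be probs.
    kl = F.kl_div(
        F.log_softmax(child_logits, dim=-1),
        F.softmax(parent_logits.detach(), dim=-1),  # parent is frozen
        reduction="batchmean",
    )
    return (1 - alpha) * ce + alpha * kl

# Toy usage: batch of 4 target positions, vocabulary of 10.
child = torch.randn(4, 10, requires_grad=True)
parent = torch.randn(4, 10)
gold = torch.tensor([1, 3, 5, 7])
loss = consistency_loss(child, parent, gold)
loss.backward()
```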
Joint Dropout: Improving Generalizability in Low-Resource Neural Machine Translation through Phrase Pair Variables
Despite the tremendous success of Neural Machine Translation (NMT), its performance on low-resource language pairs remains subpar, partly due to a limited ability to handle previously unseen inputs, i.e., generalization.
Low-resource neural machine translation with morphological modeling
An attention augmentation scheme for the transformer model is proposed in a generic form to allow integration of pre-trained language models and to facilitate modeling of word-order relationships between the source and target languages.
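One plausible reading of such an augmentation is an extra attention path in the decoder that attends over representations from a pre-trained language model alongside the usual cross-attention over encoder states. The sketch below illustrates that idea under assumed dimensions and a simple concatenation-based fusion; the class name, gating, and shapes are assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class AugmentedCrossAttention(nn.Module):
    """Illustrative attention augmentation: alongside cross-attention
    over encoder states, a second attention attends over pre-trained
    language model (PLM) representations; the two results are fused."""
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.enc_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.plm_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, decoder_states, encoder_states, plm_states):
        enc_out, _ = self.enc_attn(decoder_states, encoder_states, encoder_states)
        plm_out, _ = self.plm_attn(decoder_states, plm_states, plm_states)
        # Combine the two attention outputs with a learned projection.
        return self.fuse(torch.cat([enc_out, plm_out], dim=-1))

# Toy shapes: batch 2, target length 5, source length 7, PLM length 7.
layer = AugmentedCrossAttention()
out = layer(torch.randn(2, 5, 512), torch.randn(2, 7, 512), torch.randn(2, 7, 512))
print(out.shape)  # torch.Size([2, 5, 512])
```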
From Priest to Doctor: Domain Adaptation for Low-Resource Neural Machine Translation
Many of the world's languages have insufficient data to train high-performing general neural machine translation (NMT) models, let alone domain-specific models, and often the only available parallel data are small amounts of religious texts.