no code implementations • WMT (EMNLP) 2020 • Jeremy Gwinnup, Tim Anderson
This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) systems submitted to the news-translation task as part of the 2020 Conference on Machine Translation (WMT20) evaluation campaign.
no code implementations • WMT (EMNLP) 2021 • Grant Erdmann, Jeremy Gwinnup, Tim Anderson
This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT21 evaluation campaign.
no code implementations • IWSLT 2016 • Michaeel Kazi, Elizabeth Salesky, Brian Thompson, Jonathan Taylor, Jeremy Gwinnup, Timothy Anderson, Grant Erdmann, Eric Hansen, Brian Ore, Katherine Young, Michael Hutt
This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run during the 2016 IWSLT evaluation campaign.
no code implementations • IWSLT (EMNLP) 2018 • Brian Ore, Eric Hansen, Katherine Young, Grant Erdmann, Jeremy Gwinnup
This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) and automatic speech recognition (ASR) systems submitted to the spoken language translation (SLT) and low-resource MT tasks as part of the IWSLT18 evaluation campaign.
no code implementations • 5 Mar 2024 • Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup
Therefore, we propose that MMT models be evaluated using 1) the CoMMuTE evaluation framework, which measures the use of visual information by MMT models, 2) the text-only WMT news translation task test sets, which evaluate translation performance against complex sentences, and 3) the Multi30k test sets, for measuring MMT model performance against a real MMT dataset.
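As a rough illustration of this multi-benchmark evaluation (a sketch, not the authors' actual harness), the snippet below scores a single model against several test sets with sacrebleu; the translate function and file paths are hypothetical, and CoMMuTE's contrastive scoring is not reproduced here.

```python
# Sketch: score one MT model against several test sets. The translate()
# function and the local test-file paths are hypothetical placeholders.
import sacrebleu

def translate(sentences):
    # Stand-in for the MMT model under evaluation (assumption).
    raise NotImplementedError

TEST_SETS = {
    "multi30k": ("multi30k.src", "multi30k.ref"),   # in-domain MMT data
    "wmt_news": ("wmt_news.src", "wmt_news.ref"),   # complex text-only sentences
}

for name, (src_path, ref_path) in TEST_SETS.items():
    with open(src_path, encoding="utf-8") as f:
        sources = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        references = [line.strip() for line in f]
    hypotheses = translate(sources)
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"{name}: BLEU = {bleu.score:.1f}")
```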
no code implementations • 5 Mar 2024 • Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup
While most current work in multimodal machine translation (MMT) uses the Multi30k dataset for training and evaluation, we find that the resulting models overfit to the Multi30k dataset to an extreme degree.
no code implementations • 5 Mar 2024 • Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup
The challenge of visual grounding and masking in multimodal machine translation (MMT) systems has encouraged varying approaches to the detection and selection of visually-grounded text tokens for masking.
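A minimal sketch of one such masking strategy, assuming a precomputed set of visually-groundable tokens; the token list and the [MASK] symbol are illustrative and not the detection method of any particular system.

```python
# Sketch: mask tokens judged to be visually grounded, assuming a
# precomputed groundable-token set (illustrative, not the paper's method).
VISUALLY_GROUNDED = {"dog", "ball", "red", "man", "street"}  # assumption

def mask_grounded_tokens(sentence, mask_token="[MASK]"):
    return " ".join(
        mask_token if tok.lower() in VISUALLY_GROUNDED else tok
        for tok in sentence.split()
    )

print(mask_grounded_tokens("A man throws a red ball to his dog"))
# -> "A [MASK] throws a [MASK] [MASK] to his [MASK]"
```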
no code implementations • 12 Jun 2023 • Jeremy Gwinnup, Kevin Duh
Large language models such as BERT and the GPT series started a paradigm shift that calls for building general-purpose models via pre-training on large datasets, followed by fine-tuning on task-specific datasets.
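A minimal sketch of that pretrain-then-fine-tune recipe using Hugging Face Transformers; the model name, label set, and toy data are assumptions chosen only to make the example run end to end.

```python
# Sketch of the pretrain-then-fine-tune paradigm; model name, labels,
# and the tiny dataset below are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # general-purpose pre-trained encoder
)

# Toy task-specific data (assumption), standing in for a real dataset.
texts = ["great translation", "terrible translation"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps stand in for full fine-tuning
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```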
1 code implementation • 11 Sep 2022 • Nicholas Kashani Motlagh, Jim Davis, Tim Anderson, Jeremy Gwinnup
We propose a new Reject Option Classification technique to identify and remove regions of uncertainty in the decision space for a given neural classifier and dataset.
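The snippet below is only a stand-in for the idea of a reject option: it abstains whenever the classifier's softmax confidence falls below a threshold. The paper's technique identifies and removes uncertain regions of the decision space rather than thresholding per-example confidence, so treat this as an interface sketch, not the proposed method.

```python
# Sketch: a simple confidence-threshold reject option (assumption; the
# paper's method operates on regions of the decision space instead).
import numpy as np

def predict_with_reject(probs, threshold=0.8, reject_label=-1):
    """probs: (n_samples, n_classes) softmax outputs."""
    confident = probs.max(axis=1) >= threshold
    preds = probs.argmax(axis=1)
    return np.where(confident, preds, reject_label)

probs = np.array([[0.95, 0.05], [0.55, 0.45]])
print(predict_with_reject(probs))  # [ 0 -1]: the second sample is rejected
```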
no code implementations • WS 2020 • Brian Ore, Eric Hansen, Tim Anderson, Jeremy Gwinnup
This report summarizes the Air Force Research Laboratory (AFRL) submission to the offline spoken language translation (SLT) task as part of the IWSLT 2020 evaluation campaign.
no code implementations • WS 2019 • Jeremy Gwinnup, Grant Erdmann, Tim Anderson
This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT19 evaluation campaign.
no code implementations • WS 2019 • Grant Erdmann, Jeremy Gwinnup
The WMT19 Parallel Corpus Filtering For Low-Resource Conditions Task aims to test various methods of filtering noisy parallel corpora to make them useful for training machine translation systems.
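For context, a minimal sketch of the kind of heuristics such filtering tasks invite (length ratio, length caps, and copy detection); these particular rules are generic assumptions, not the filters used in the AFRL submission.

```python
# Sketch: generic parallel-corpus filtering heuristics (assumptions,
# not the submitted system's actual filters).
def keep_pair(src, tgt, max_ratio=2.0, max_len=100):
    src_toks, tgt_toks = src.split(), tgt.split()
    if not src_toks or not tgt_toks:
        return False                      # drop empty segments
    if len(src_toks) > max_len or len(tgt_toks) > max_len:
        return False                      # drop overlong segments
    ratio = len(src_toks) / len(tgt_toks)
    if ratio > max_ratio or ratio < 1.0 / max_ratio:
        return False                      # drop badly mismatched lengths
    if src.strip() == tgt.strip():
        return False                      # drop untranslated copies
    return True
```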
no code implementations • NAACL 2019 • Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, Philipp Koehn
Continued training is an effective method for domain adaptation in neural machine translation.
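A minimal sketch of continued training, assuming a converged general-domain checkpoint that is then trained further on in-domain data only; the toy model and random batch are placeholders so the example runs end to end.

```python
# Sketch: continued training for domain adaptation. The tiny model,
# checkpoint path, and random "in-domain" batch are placeholders.
import torch
import torch.nn as nn

vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))  # toy stand-in

# In practice: model.load_state_dict(torch.load("general_domain.pt"))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # often a reduced LR

src = torch.randint(0, vocab, (8, 16))   # placeholder in-domain batch
tgt = torch.randint(0, vocab, (8, 16))
for _ in range(3):                       # continue training on in-domain data only
    logits = model(src)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tgt.reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```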
1 code implementation • 2 Nov 2018 • Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J. Martindale, Paul McNamee, Kevin Duh, Marine Carpuat
Machine translation systems based on deep neural networks are expensive to train.
no code implementations • WS 2018 • Jeremy Gwinnup, Tim Anderson, Grant Erdmann, Katherine Young
This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT18 evaluation campaign.
no code implementations • WS 2018 • Grant Erdmann, Jeremy Gwinnup
The WMT 2018 Parallel Corpus Filtering Task aims to test various methods of filtering a noisy parallel corpus to make it useful for training machine translation systems.
no code implementations • WS 2018 • Jeremy Gwinnup, Joshua Sandvick, Michael Hutt, Grant Erdmann, John Duselis, James Davis
AFRL-Ohio State extends its use of visual domain-driven machine translation, employing it as a peer to traditional machine translation systems.
1 code implementation • WS 2018 • Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, Philipp Koehn
To better understand the effectiveness of continued training, we analyze the major components of a neural machine translation system (the encoder, decoder, and each embedding space) and consider each component's contribution to, and capacity for, domain adaptation.
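A minimal sketch of how such a component-wise analysis can be run, assuming a model whose submodules are named after the components under study; freezing one component during continued training isolates the others' capacity for adaptation.

```python
# Sketch: freeze one named component during continued training. The toy
# model and its submodule names are assumptions about model structure.
import torch.nn as nn

class ToyNMT(nn.Module):
    # Minimal stand-in exposing the components the paper analyzes (assumption).
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(1000, 64)
        self.tgt_embed = nn.Embedding(1000, 64)
        self.encoder = nn.LSTM(64, 64)
        self.decoder = nn.LSTM(64, 64)

def freeze_component(model: nn.Module, component: str):
    """Disable gradients for every parameter under the named submodule."""
    for name, param in model.named_parameters():
        if name.startswith(component):
            param.requires_grad = False

model = ToyNMT()
freeze_component(model, "encoder")   # e.g. test the decoder's capacity alone
```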