Search Results for author: Jeremy Gwinnup

Found 26 papers, 3 papers with code

The AFRL IWSLT 2018 Systems: What Worked, What Didn’t

no code implementations • IWSLT (EMNLP) 2018 • Brian Ore, Eric Hansen, Katherine Young, Grant Erdmann, Jeremy Gwinnup

This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) and automatic speech recognition (ASR) systems submitted to the spoken language translation (SLT) and low-resource MT tasks as part of the IWSLT18 evaluation campaign.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3

The AFRL WMT20 News Translation Systems

no code implementations • WMT (EMNLP) 2020 • Jeremy Gwinnup, Tim Anderson

This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) systems submitted to the news-translation task as part of the 2020 Conference on Machine Translation (WMT20) evaluation campaign.

Machine Translation • Translation

Tune in: The AFRL WMT21 News-Translation Systems

no code implementations • WMT (EMNLP) 2021 • Grant Erdmann, Jeremy Gwinnup, Tim Anderson

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT21 evaluation campaign.

Machine Translation • Translation

Adding Multimodal Capabilities to a Text-only Translation Model

no code implementations • 5 Mar 2024 • Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup

While most current work in multimodal machine translation (MMT) uses the Multi30k dataset for training and evaluation, we find that the resulting models overfit to the Multi30k dataset to an extreme degree.

Multimodal Machine Translation • Translation

Detecting Concrete Visual Tokens for Multimodal Machine Translation

no code implementations • 5 Mar 2024 • Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup

The challenge of visual grounding and masking in multimodal machine translation (MMT) systems has encouraged varying approaches to the detection and selection of visually-grounded text tokens for masking.

Multimodal Machine Translation • object-detection +3
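
The general recipe the abstract points at — detecting visually grounded text tokens and masking them — can be illustrated with a short sketch. The toy lexicon and `[MASK]` placeholder below are illustrative assumptions, not the detection method proposed in the paper:

```python
# Hypothetical sketch of masking visually grounded ("concrete") source tokens
# before feeding a sentence to an MMT model. The concreteness lexicon and the
# [MASK] placeholder are illustrative assumptions, not the paper's method.

CONCRETE_TOKENS = {"dog", "ball", "bicycle", "woman", "street"}  # toy lexicon

def mask_concrete_tokens(sentence: str, mask: str = "[MASK]") -> str:
    """Replace tokens judged visually concrete with a mask symbol."""
    return " ".join(
        mask if tok.lower() in CONCRETE_TOKENS else tok
        for tok in sentence.split()
    )

print(mask_concrete_tokens("A woman throws a ball to her dog"))
# -> "A [MASK] throws a [MASK] to her [MASK]"
```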

The Case for Evaluating Multimodal Translation Models on Text Datasets

no code implementations • 5 Mar 2024 • Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup

Therefore, we propose that MMT models be evaluated using 1) the CoMMuTE evaluation framework, which measures the use of visual information by MMT models, 2) the text-only WMT news translation task test sets, which evaluate translation performance on complex sentences, and 3) the Multi30k test sets, for measuring MMT model performance against a real MMT dataset.

Descriptive • Image Captioning +2

A Survey of Vision-Language Pre-training from the Lens of Multimodal Machine Translation

no code implementations • 12 Jun 2023 • Jeremy Gwinnup, Kevin Duh

Large language models such as BERT and the GPT series started a paradigm shift that calls for building general-purpose models via pre-training on large datasets, followed by fine-tuning on task-specific datasets.

Image Captioning • Multimodal Machine Translation +3
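
In code, the pre-train-then-fine-tune paradigm the survey covers looks roughly like the minimal PyTorch sketch below; the checkpoint filename and the two-class head are hypothetical placeholders, not drawn from the paper:

```python
import torch
import torch.nn as nn

# Minimal sketch of the pre-train/fine-tune paradigm: load pretrained weights,
# attach a task-specific head, and update both on the downstream task.
# "pretrained_encoder.pt" and the 2-class head are hypothetical placeholders.

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
)
encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # pretrained weights

head = nn.Linear(512, 2)  # new task-specific classification head

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=2e-5
)

def fine_tune_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One fine-tuning step; x is a batch of already-embedded tokens (B, T, 512)."""
    logits = head(encoder(x).mean(dim=1))  # mean-pool token states, then classify
    loss = nn.functional.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```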

Learning When to Say "I Don't Know"

1 code implementation11 Sep 2022 Nicholas Kashani Motlagh, Jim Davis, Tim Anderson, Jeremy Gwinnup

We propose a new Reject Option Classification technique to identify and remove regions of uncertainty in the decision space for a given neural classifier and dataset.

text-classification • Text Classification
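
For context, a common baseline for reject-option classification is simple confidence thresholding, sketched below; the 0.9 threshold is an illustrative assumption, and the paper's technique, which removes regions of uncertainty in the decision space, goes beyond this baseline:

```python
import torch
import torch.nn.functional as F

# Generic reject-option baseline: abstain ("I don't know") whenever the
# classifier's top softmax probability falls below a threshold. The 0.9
# threshold is an illustrative assumption, not a value from the paper.

REJECT = -1  # label returned when the model abstains

def predict_with_reject(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    preds[conf < threshold] = REJECT  # abstain on low-confidence inputs
    return preds

logits = torch.tensor([[4.0, 0.1, 0.2], [1.0, 1.1, 0.9]])
print(predict_with_reject(logits))  # tensor([ 0, -1])
```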

The AFRL IWSLT 2020 Systems: Work-From-Home Edition

no code implementations • WS 2020 • Brian Ore, Eric Hansen, Tim Anderson, Jeremy Gwinnup

This report summarizes the Air Force Research Laboratory (AFRL) submission to the offline spoken language translation (SLT) task as part of the IWSLT 2020 evaluation campaign.

Action Detection • Activity Detection +9

The AFRL WMT19 Systems: Old Favorites and New Tricks

no code implementations • WS 2019 • Jeremy Gwinnup, Grant Erdmann, Tim Anderson

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT19 evaluation campaign.

Domain Adaptation • Machine Translation +1

Quality and Coverage: The AFRL Submission to the WMT19 Parallel Corpus Filtering for Low-Resource Conditions Task

no code implementations • WS 2019 • Grant Erdmann, Jeremy Gwinnup

The WMT19 Parallel Corpus Filtering For Low-Resource Conditions Task aims to test various methods of filtering noisy parallel corpora, to make them useful for training machine translation systems.

Machine Translation • Translation
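
Filtering methods in these shared tasks often start from simple heuristics; the sketch below shows two common ones (length-ratio and copy filtering) purely as an illustration, with assumed thresholds, not the AFRL submission itself:

```python
# Illustrative noisy-parallel-corpus filter using two common heuristics:
# drop sentence pairs with an extreme source/target length ratio, and drop
# pairs where source and target are identical (often untranslated copies).
# The 2.0 ratio threshold is an assumption, not from the AFRL submission.

def keep_pair(src: str, tgt: str, max_ratio: float = 2.0) -> bool:
    src_len, tgt_len = len(src.split()), len(tgt.split())
    if src_len == 0 or tgt_len == 0:
        return False
    if src.strip() == tgt.strip():          # untranslated copy
        return False
    ratio = max(src_len, tgt_len) / min(src_len, tgt_len)
    return ratio <= max_ratio               # plausible length ratio

pairs = [("ein kleiner Hund", "a small dog"),
         ("http://example.com", "http://example.com"),
         ("ja", "yes yes yes yes yes yes")]
print([keep_pair(s, t) for s, t in pairs])  # [True, False, False]
```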

Coverage and Cynicism: The AFRL Submission to the WMT 2018 Parallel Corpus Filtering Task

no code implementations • WS 2018 • Grant Erdmann, Jeremy Gwinnup

The WMT 2018 Parallel Corpus Filtering Task aims to test various methods of filtering a noisy parallel corpus, to make it useful for training machine translation systems.

Machine Translation • Translation

The AFRL WMT18 Systems: Ensembling, Continuation and Combination

no code implementations • WS 2018 • Jeremy Gwinnup, Tim Anderson, Grant Erdmann, Katherine Young

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT18 evaluation campaign.

Machine Translation • Translation

Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation

1 code implementation • WS 2018 • Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, Philipp Koehn

To better understand the effectiveness of continued training, we analyze the major components of a neural machine translation system (the encoder, decoder, and each embedding space) and consider each component's contribution to, and capacity for, domain adaptation.

Domain Adaptation • Machine Translation +1
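
The freezing experiments described above can be expressed in a few lines of PyTorch; this is a generic sketch with a toy seq2seq model standing in for a real NMT system, not the paper's released code:

```python
import torch.nn as nn

# Generic sketch of freezing one NMT component (here, the encoder) during
# continued training, so that only the remaining components adapt to the
# new domain. The toy model below stands in for a real NMT system.

class ToySeq2Seq(nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 256):
        super().__init__()
        self.src_embed = nn.Embedding(vocab, dim)
        self.tgt_embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)

model = ToySeq2Seq()

# Freeze the encoder: its parameters receive no gradient updates.
for p in model.encoder.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "parameters still adapt")
```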
