Search Results for author: Tobias Domhan

Found 9 papers, 8 papers with code

Sockeye 2: A Toolkit for Neural Machine Translation

1 code implementation · EAMT 2020 · Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar

We present Sockeye 2, a modernized and streamlined version of the Sockeye neural machine translation (NMT) toolkit.

Tasks: Machine Translation, Translation

The Devil is in the Details: On the Pitfalls of Vocabulary Selection in Neural Machine Translation

1 code implementation · 13 May 2022 · Tobias Domhan, Eva Hasler, Ke Tran, Sony Trenous, Bill Byrne, Felix Hieber

Vocabulary selection, or lexical shortlisting, is a well-known technique to improve latency of Neural Machine Translation models by constraining the set of allowed output words during inference.
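The idea behind lexical shortlisting can be sketched as masking the decoder's output distribution so that only a restricted set of target-vocabulary ids remains scorable. This is an illustrative toy sketch, not the paper's or Sockeye's actual implementation; the function name and shortlist are made up for the example.

```python
import numpy as np

def shortlist_logits(logits: np.ndarray, allowed_ids: list[int]) -> np.ndarray:
    """Illustrative sketch: mask decoder logits so only shortlisted
    vocabulary ids can be selected during inference."""
    masked = np.full_like(logits, -np.inf)
    masked[allowed_ids] = logits[allowed_ids]
    return masked

# Toy example: 10-word vocabulary, id 9 would normally score highest.
logits = np.arange(10, dtype=float)
allowed = [2, 5, 7]  # hypothetical shortlist, e.g. from a lexical table
best = int(np.argmax(shortlist_logits(logits, allowed)))
print(best)  # highest-scoring id within the shortlist
```

Because the softmax (and argmax) now runs over only the shortlisted ids, inference latency drops; the paper's point is that the pitfalls lie in how the shortlist is chosen.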

Tasks: Machine Translation, Translation

Image Captioning as Neural Machine Translation Task in SOCKEYE

1 code implementation · 9 Oct 2018 · Loris Bazzani, Tobias Domhan, Felix Hieber

Image captioning is an interdisciplinary research problem that stands between computer vision and natural language processing.

Tasks: Image Captioning, Machine Translation, +1

How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures

1 code implementation · ACL 2018 · Tobias Domhan

With recent advances in network architectures for Neural Machine Translation (NMT), recurrent models have effectively been replaced by either convolutional or self-attentional approaches, such as in the Transformer.

Tasks: Machine Translation, Translation

Sockeye: A Toolkit for Neural Machine Translation

16 code implementations · 15 Dec 2017 · Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, Matt Post

Written in Python and built on MXNet, the toolkit offers scalable training and inference for the three most prominent encoder-decoder architectures: attentional recurrent neural networks, self-attentional transformers, and fully convolutional networks.

Tasks: Machine Translation, Translation

Using Target-side Monolingual Data for Neural Machine Translation through Multi-task Learning

1 code implementation · EMNLP 2017 · Tobias Domhan, Felix Hieber

The performance of Neural Machine Translation (NMT) models relies heavily on the availability of sufficient amounts of parallel data, and an efficient and effective way of leveraging the vast amounts of available monolingual data has yet to be found.

Tasks: Language Modelling, Machine Translation, +2
