EACL 2017 • Iacer Calixto, Daniel Stein, Evgeny Matusov, Pintu Lohar, Sheila Castilho, Andy Way
We evaluate our models quantitatively using BLEU and TER and find that (i) additional synthetic data has a general positive impact on text-only and multi-modal NMT models, and that (ii) using a multi-modal NMT model for re-ranking n-best lists improves TER significantly across different n-best list sizes.
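The re-ranking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `rerank` and the toy candidates and scores are hypothetical, standing in for a real n-best list and a multi-modal NMT model's scoring function.

```python
def rerank(nbest, rescore):
    """Sort an n-best list by a re-scoring model's score, best first.

    `nbest` is a list of candidate translations; `rescore` maps a
    candidate to a score (higher is better). In the paper's setting the
    first-pass candidates come from a text-only NMT system and the
    re-scorer is a multi-modal NMT model; here both are stand-ins.
    """
    return sorted(nbest, key=rescore, reverse=True)


# Toy example with purely illustrative scores from a hypothetical model.
scores = {
    "a cat sits on the mat": 0.91,
    "a cat sat on a mat": 0.84,
    "the cat mat sits": 0.42,
}
nbest = list(scores)
best = rerank(nbest, scores.get)[0]
print(best)  # prints the candidate the re-scorer prefers
```

Only the final ranking changes under this scheme; the candidate set itself is fixed by the first-pass system, which is why the n-best list size matters in the evaluation.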
WS 2017 • Iacer Calixto, Daniel Stein, Evgeny Matusov, Sheila Castilho, Andy Way
Nonetheless, human evaluators ranked translations from a multi-modal NMT model as better than those of a text-only NMT over 88% of the time, which suggests that images do help NMT in this use-case.
LREC 2014 • Michael Stadtschnitzer, Jochen Schwenninger, Daniel Stein, Joachim Koehler
In this paper we describe the large-scale German broadcast corpus (GER-TV1000h) containing more than 1,000 hours of transcribed speech data.
LREC 2012 • Daniel Stein, Bela Usabaev
For a reliable keyword extraction on firefighter radio communication, a strong automatic speech recognition system is needed.