Search Results for author: Thomas Zenkel

Found 9 papers, 3 papers with code

Adding Interpretable Attention to Neural Translation Models Improves Word Alignment

1 code implementation · 31 Jan 2019 · Thomas Zenkel, Joern Wuebker, John DeNero

Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler and shallower models, but determining what source context is most relevant to each target word is more challenging as a result.

Tasks: Machine Translation · Translation · +1
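The abstract above concerns inferring word alignments from attention. A minimal sketch of one common baseline heuristic, linking each target word to the source position with the highest attention weight (illustrative values only, and not necessarily the paper's exact method):

```python
# Hypothetical attention weights for a 3-word target over a 4-word source
# (rows = target positions, columns = source positions); values are made up.
attention = [
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.1, 0.2, 0.6],
]

# Baseline alignment heuristic: each target word aligns to the source
# position receiving the most attention mass.
alignment = [max(range(len(row)), key=row.__getitem__) for row in attention]
print(alignment)  # [0, 1, 3]
```

With multiple layers and heads, there is no single attention matrix to read off, which is the difficulty the abstract points to; this sketch assumes one aggregated matrix is available.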

Subword and Crossword Units for CTC Acoustic Models

no code implementations · 19 Dec 2017 · Thomas Zenkel, Ramon Sanabria, Florian Metze, Alex Waibel

This paper proposes a novel approach to creating a unit set for CTC-based speech recognition systems.

Tasks: Language Modelling · Speech Recognition · +1
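Subword unit sets are typically built by iteratively merging frequent symbol pairs, as in byte-pair encoding. A toy sketch of the pair-counting step at the core of such a procedure (a generic BPE-style illustration, not the paper's specific algorithm; corpus and names are invented):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs over a corpus of symbolized words
    and return the most frequent pair, the next merge candidate."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

# Toy corpus: words as tuples of characters, mapped to their frequencies.
corpus = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "o", "w"): 3}
print(most_frequent_pair(corpus))  # ('o', 'w')
```

Repeating this merge step grows the unit inventory from characters toward larger subwords; a CTC model can then emit these units directly.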

End-to-End Neural Word Alignment Outperforms GIZA++

no code implementations · ACL 2020 · Thomas Zenkel, Joern Wuebker, John DeNero

Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection.

Tasks: Machine Translation · Translation · +1

Automatic Bilingual Markup Transfer

2 code implementations · Findings (EMNLP) 2021 · Thomas Zenkel, Joern Wuebker, John DeNero

We describe the task of bilingual markup transfer, which involves placing markup tags from a source sentence into a fixed target translation.

Tasks: Machine Translation · Sentence · +1
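One natural way to place source markup into a fixed target translation is to project each tagged source span through a word alignment. A minimal sketch of that span projection (a simplified illustration with invented data, not the paper's method):

```python
def transfer_span(src_span, alignment):
    """Project a source tag span onto the target via a word alignment.

    src_span:  (start, end) source positions covered by the tag, inclusive.
    alignment: list of (source_index, target_index) links.
    Returns the (start, end) target span covering all linked positions,
    or None if nothing inside the span is aligned.
    """
    targets = [t for s, t in alignment if src_span[0] <= s <= src_span[1]]
    if not targets:
        return None
    return min(targets), max(targets)

# Source "the red button" with <b> covering positions 1-2, aligned to a
# target where the adjective order flips ("le bouton rouge").
links = [(0, 0), (1, 2), (2, 1)]
print(transfer_span((1, 2), links))  # (1, 2)
```

The hard cases are unaligned or discontinuous spans, where a simple min/max projection like this can place tags incorrectly; the task definition above fixes the target text, so only tag positions need to be predicted.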

The 2016 KIT IWSLT Speech-to-Text Systems for English and German

no code implementations · IWSLT 2016 · Thai-Son Nguyen, Markus Müller, Matthias Sperber, Thomas Zenkel, Kevin Kilgour, Sebastian Stüker, Alex Waibel

For the English TED task, our best combination system achieves a WER of 7.8% on the development set, while our other combinations achieve WERs of 21.8% and 28.7% on the English and German MSLT tasks.

The 2017 KIT IWSLT Speech-to-Text Systems for English and German

no code implementations · IWSLT 2017 · Thai-Son Nguyen, Markus Müller, Matthias Sperber, Thomas Zenkel, Sebastian Stüker, Alex Waibel

For the English lecture task, our best combination system achieves a WER of 8.3% on the tst2015 development set, while our other combinations achieve a WER of 25.7% on the German lecture tasks.
