no code implementations • 2 Apr 2025 • Jonas F. Lotz, Hendra Setiawan, Stephan Peitz, Yova Kementchedjhieva
Subword tokenization requires balancing computational efficiency and vocabulary coverage, which often leads to suboptimal performance on languages and scripts not prioritized during training.
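To make the trade-off concrete, here is a minimal, self-contained sketch (the toy vocabulary `VOCAB` and the greedy longest-match rule are assumptions for illustration, not the paper's tokenizer): a word covered by the learned vocabulary segments into a few subwords, while a word in a script unseen at training time falls back to byte tokens and fragments badly.

```python
# Toy illustration of the vocabulary-coverage problem in subword tokenization.
# The vocabulary and words are hypothetical; real tokenizers (BPE, unigram LM)
# learn their merges from data, but the failure mode is the same: text in
# scripts underrepresented at training time fragments into many small tokens.

VOCAB = {"trans", "lation", "token", "ization", "the", "a"}  # assumed toy vocab

def tokenize(word: str) -> list[str]:
    """Greedy longest-match segmentation with byte-level fallback."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):        # try the longest substring first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:                                     # no match: fall back to bytes
            tokens.extend(f"<0x{b:02X}>" for b in word[i].encode("utf-8"))
            i += 1
    return tokens

print(tokenize("translation"))  # ['trans', 'lation'] -- 2 tokens
print(tokenize("переклад"))     # 16 byte tokens: 8 Cyrillic chars x 2 bytes each
```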
no code implementations • 4 May 2023 • Telmo Pessoa Pires, Robin M. Schmidt, Yi-Hsiu Liao, Stephan Peitz
Multilingual Machine Translation promises to improve translation quality between non-English languages.
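A common recipe for serving many directions with a single model is to prepend a target-language token to each source sentence, as in Johnson et al. (2017); whether this paper follows that scheme is an assumption, and the tags, sentences, and `make_example` helper below are illustrative only. The point is that direct non-English pairs (e.g., pt→es) can be trained without pivoting through English:

```python
# Sketch of target-language tagging for a single multilingual NMT model.
# This is the general tagging recipe, not necessarily this paper's method;
# all tags and sentence pairs are made up for illustration.

def make_example(src: str, tgt: str, tgt_lang: str) -> tuple[str, str]:
    """Prepend a target-language token so one model serves all directions."""
    return f"<2{tgt_lang}> {src}", tgt

# Hypothetical training pairs, including a direct non-English direction:
pairs = [
    make_example("Como você está?", "¿Cómo estás?", "es"),   # pt -> es, direct
    make_example("How are you?", "Wie geht es dir?", "de"),  # en -> de
]
for src, tgt in pairs:
    print(src, "=>", tgt)
```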
no code implementations • 25 Apr 2023 • Ali Vardasbi, Telmo Pessoa Pires, Robin M. Schmidt, Stephan Peitz
Structured State Spaces for Sequences (S4) is a recently proposed sequence model with successful applications in various tasks, e.g., vision, language modeling, and audio.
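At its core, S4 unrolls a linear state-space model over the sequence. The NumPy sketch below shows only the recurrent view, with random, already-discretized matrices as a simplifying assumption; S4's HiPPO initialization and its efficient convolutional computation of the same recurrence are omitted:

```python
import numpy as np

# Minimal sketch of the linear state-space recurrence underlying S4:
#   x_k = A @ x_{k-1} + B * u_k,   y_k = C @ x_k
# A and B are assumed to be already discretized; real S4 uses a structured
# HiPPO-initialized A and evaluates the unrolled recurrence as a long
# convolution for efficiency. Random matrices here are purely illustrative.

rng = np.random.default_rng(0)
N, L = 4, 10                       # state size, sequence length
A = 0.1 * rng.normal(size=(N, N))  # state transition (HiPPO-structured in S4)
B = rng.normal(size=(N, 1))        # input projection
C = rng.normal(size=(1, N))        # output projection

u = rng.normal(size=L)             # a 1-D input sequence
x = np.zeros((N, 1))
ys = []
for k in range(L):                 # recurrent (RNN-like) view of the SSM
    x = A @ x + B * u[k]
    y = C @ x                      # y has shape (1, 1)
    ys.append(float(y[0, 0]))
print(ys)
```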
no code implementations • 21 May 2022 • Robin M. Schmidt, Telmo Pires, Stephan Peitz, Jonas Lööf
Non-autoregressive approaches aim to improve the inference speed of translation models by requiring only a single forward pass to generate the output sequence, instead of producing each predicted token iteratively.
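The sketch below contrasts the two decoding regimes with a stand-in scoring function (the model and its scores are hypothetical): autoregressive decoding needs one forward pass per output token, while non-autoregressive decoding needs one pass in total:

```python
import numpy as np

# Contrast between autoregressive (AR) and non-autoregressive (NAT) decoding.
# The score functions stand in for a real translation model (hypothetical
# here); the point is the number of forward passes, not the toy scores.

rng = np.random.default_rng(0)
V, T = 100, 8         # vocabulary size; target length (NAT predicts it up front)

def ar_step(prefix):  # one AR forward pass -> logits for the next token
    return rng.normal(size=V)

def nat_pass():       # one NAT forward pass -> logits for all T positions
    return rng.normal(size=(T, V))

# Autoregressive: T sequential passes, each conditioned on the growing prefix.
prefix = []
for _ in range(T):
    prefix.append(int(np.argmax(ar_step(prefix))))

# Non-autoregressive: a single pass predicts all positions in parallel, at the
# cost of treating the output positions as conditionally independent.
nat_tokens = np.argmax(nat_pass(), axis=-1).tolist()

print("AR  (T passes):", prefix)
print("NAT (1 pass):  ", nat_tokens)
```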
1 code implementation • IJCNLP 2019 • Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik
The state of the art in machine translation (MT) is governed by neural approaches, which typically achieve higher translation accuracy than statistical approaches.
no code implementations • WS 2015 • Markus Freitag, Jan-Thorsten Peter, Stephan Peitz, Minwei Feng, Hermann Ney
In this paper, we enhance the traditional confusion network system combination approach with an additional neural network model.
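As a rough sketch of the base approach (equal-length hypotheses and unweighted votes are simplifying assumptions; real systems align hypotheses of different lengths against a skeleton, and the paper's neural model would contribute an additional score per slot, not shown): the system outputs are aligned into per-position slots of a confusion network, and each slot is decided by vote:

```python
from collections import Counter

# Simplified confusion-network system combination: hypotheses from different
# MT systems are aligned into per-position slots, and each slot votes on its
# word. Equal lengths and raw vote counts are assumptions made to keep the
# sketch short; production systems weight votes with extra model scores.

hyps = [
    "the cat sat on the mat".split(),
    "the cat sits on the mat".split(),
    "a cat sat on the mat".split(),
]

slots = zip(*hyps)                          # one slot per output position
combined = [Counter(slot).most_common(1)[0][0] for slot in slots]
print(" ".join(combined))                   # -> "the cat sat on the mat"
```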