Search Results for author: Parnia Bahar

Found 13 papers, 2 papers with code

Two-Way Neural Machine Translation: A Proof of Concept for Bidirectional Translation Modeling using a Two-Dimensional Grid

no code implementations · 24 Nov 2020 · Parnia Bahar, Christopher Brix, Hermann Ney

Neural translation models have proven to be effective in capturing sufficient information from a source sentence and generating a high-quality target sentence.

Machine Translation · Translation

Tight Integrated End-to-End Training for Cascaded Speech Translation

no code implementations · 24 Nov 2020 · Parnia Bahar, Tobias Bieschke, Ralf Schlüter, Hermann Ney

Direct speech translation is an alternative method to avoid error propagation; however, its performance often lags behind that of the cascade system.

Translation

Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture

no code implementations · ACL 2020 · Christopher Brix, Parnia Bahar, Hermann Ney

Sparse models require less memory for storage and enable faster inference by reducing the necessary number of FLOPs.
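
No code accompanies this entry, so here is a minimal sketch of global magnitude pruning, the basic operation behind lottery-ticket experiments. The function name magnitude_prune and its default sparsity are assumptions for illustration; the stabilized lottery-ticket procedure additionally rewinds the surviving weights to early-training values, which this sketch omits.

```python
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float = 0.9) -> None:
    """Illustrative sketch: zero the globally smallest-magnitude weights.

    Only weight matrices (dim > 1) are pruned; biases and norms are kept.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    # Pool all weight magnitudes to pick one global threshold.
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_vals, sparsity)
    with torch.no_grad():
        for w in weights:
            # Keep only weights whose magnitude exceeds the threshold.
            w.mul_((w.abs() > threshold).float())
```

In a lottery-ticket setup this pruning step would be applied after training, followed by resetting the remaining weights and retraining; here it only produces the sparse mask.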

On using 2D sequence-to-sequence models for speech recognition

no code implementations · 20 Nov 2019 · Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney

Attention-based sequence-to-sequence models have shown promising results in automatic speech recognition.

Speech Recognition

A Comparative Study on End-to-end Speech to Text Translation

no code implementations · 20 Nov 2019 · Parnia Bahar, Tobias Bieschke, Hermann Ney

Recent advances in deep learning show that end-to-end speech-to-text translation models are a promising approach to direct speech translation.

Speech-to-Text Translation · Translation

On Using SpecAugment for End-to-End Speech Translation

no code implementations · 20 Nov 2019 · Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney

This work investigates a simple data augmentation technique, SpecAugment, for end-to-end speech translation.

Data Augmentation · Translation
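
For readers unfamiliar with the technique investigated above, here is a minimal SpecAugment sketch covering frequency and time masking only (time warping is omitted). The function name and the mask counts/widths are illustrative assumptions, not the paper's settings.

```python
import torch

def spec_augment(features: torch.Tensor,
                 num_freq_masks: int = 2, freq_width: int = 8,
                 num_time_masks: int = 2, time_width: int = 20) -> torch.Tensor:
    """Illustrative sketch: mask random bands of a (T, F) log-mel feature matrix."""
    feats = features.clone()
    T, F = feats.shape
    # Frequency masking: zero out random bands of consecutive channels.
    for _ in range(num_freq_masks):
        f = int(torch.randint(0, freq_width + 1, (1,)))
        f0 = int(torch.randint(0, max(F - f, 1), (1,)))
        feats[:, f0:f0 + f] = 0.0
    # Time masking: zero out random spans of consecutive frames.
    for _ in range(num_time_masks):
        t = int(torch.randint(0, time_width + 1, (1,)))
        t0 = int(torch.randint(0, max(T - t, 1), (1,)))
        feats[t0:t0 + t, :] = 0.0
    return feats
```

Applied on the fly during training, the masking forces the model not to rely on any single frequency band or time span of the input speech.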

The RWTH Aachen University Machine Translation Systems for WMT 2019

no code implementations · WS 2019 · Jan Rosendahl, Christian Herold, Yunsu Kim, Miguel Graça, Weiyue Wang, Parnia Bahar, Yingbo Gao, Hermann Ney

For the De-En task, none of the tested methods gave a significant improvement over last year's winning system, and we end up with the same performance, resulting in 39.6% BLEU on newstest2019.

Language Modelling · Machine Translation · +2

Towards Two-Dimensional Sequence to Sequence Model in Neural Machine Translation

1 code implementation · EMNLP 2018 · Parnia Bahar, Christopher Brix, Hermann Ney

This work investigates an alternative model for neural machine translation (NMT) and proposes a novel architecture, where we employ a multi-dimensional long short-term memory (MDLSTM) for translation modeling.

Machine Translation · Translation
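
The paper above has an official implementation; purely to illustrate the 2D idea, here is a simplified sketch of one recurrent cell scanned over the (target × source) grid. The class name is an assumption, and averaging the two predecessor states is a simplification: the actual MDLSTM gates each incoming state with its own forget gate.

```python
import torch
import torch.nn as nn

class Grid2DLSTM(nn.Module):
    """Illustrative sketch of a 2D sequence-to-sequence layer (not the paper's MDLSTM).

    The cell at grid position (j, i) reads the source embedding i and the
    target embedding j, plus the states of its left and lower neighbours.
    """

    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.cell = nn.LSTMCell(2 * emb_dim, hidden_dim)

    def forward(self, src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> torch.Tensor:
        # src_emb: (I, emb_dim), tgt_emb: (J, emb_dim); batch size 1 for clarity.
        I, J = src_emb.size(0), tgt_emb.size(0)
        h = [[None] * I for _ in range(J)]
        c = [[None] * I for _ in range(J)]
        zero = torch.zeros(1, self.hidden_dim)
        for j in range(J):        # target axis
            for i in range(I):    # source axis
                h_prev = 0.5 * ((h[j][i - 1] if i > 0 else zero)
                                + (h[j - 1][i] if j > 0 else zero))
                c_prev = 0.5 * ((c[j][i - 1] if i > 0 else zero)
                                + (c[j - 1][i] if j > 0 else zero))
                x = torch.cat([src_emb[i], tgt_emb[j]]).unsqueeze(0)
                h[j][i], c[j][i] = self.cell(x, (h_prev, c_prev))
        # The last column summarizes each target row over the whole source.
        return torch.stack([h[j][I - 1].squeeze(0) for j in range(J)])
```

Each row of the returned (J, hidden_dim) tensor can then feed a softmax over the target vocabulary, so the grid jointly re-reads the source for every target position instead of using a separate attention module.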

The RWTH Aachen University Supervised Machine Translation Systems for WMT 2018

1 code implementation · WS 2018 · Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, Hermann Ney

In total we improve by 6.8% BLEU over our last year's submission and by 4.8% BLEU over the winning system of the 2017 German→English task.

Machine Translation Translation
