Search Results for author: Danni Liu

Found 12 papers, 4 papers with code

Effective combination of pretrained models - KIT@IWSLT2022

no code implementations IWSLT (ACL) 2022 Ngoc-Quan Pham, Tuan Nam Nguyen, Thai-Binh Nguyen, Danni Liu, Carlos Mullov, Jan Niehues, Alexander Waibel

Pretrained models in acoustic and textual modalities can potentially improve speech translation for both Cascade and End-to-end approaches.


Maastricht University’s Large-Scale Multilingual Machine Translation System for WMT 2021

no code implementations WMT (EMNLP) 2021 Danni Liu, Jan Niehues

We present our development of the multilingual machine translation system for the large-scale multilingual machine translation task at WMT 2021.

Machine Translation, Translation

Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques

1 code implementation, 26 Jan 2022 Tu Anh Dinh, Danni Liu, Jan Niehues

We investigate whether these ideas can be applied to speech translation, by building ST models trained on speech transcription and text translation data.

Data Augmentation, Machine Translation, +1

Cost-Effective Training in Low-Resource Neural Machine Translation

no code implementations, 14 Jan 2022 Sai Koneru, Danni Liu, Jan Niehues

Although AL is shown to be helpful with large budgets, it is not enough to build high-quality translation systems in these low-resource conditions.

Active Learning, Domain Adaptation, +2

Direct Simultaneous Speech-to-Speech Translation with Variational Monotonic Multihead Attention

no code implementations, 15 Oct 2021 Xutai Ma, Hongyu Gong, Danni Liu, Ann Lee, Yun Tang, Peng-Jen Chen, Wei-Ning Hsu, Phillip Koehn, Juan Pino

We present a direct simultaneous speech-to-speech translation (Simul-S2ST) model. Furthermore, the generation of the translation is independent of intermediate text representations.

Speech Synthesis, Speech-to-Speech Translation, +1

Improving Zero-Shot Translation by Disentangling Positional Information

1 code implementation ACL 2021 Danni Liu, Jan Niehues, James Cross, Francisco Guzmán, Xian Li

The difficulty of generalizing to new translation directions suggests that the model's representations are highly specific to the language pairs seen in training.

Machine Translation, Translation

Adapting End-to-End Speech Recognition for Readable Subtitles

1 code implementation WS 2020 Danni Liu, Jan Niehues, Gerasimos Spanakis

The experiments show that with limited data, far less than what is needed to train a model from scratch, we can adapt a Transformer-based ASR model to incorporate both transcription and compression capabilities.

Automatic Speech Recognition
