no code implementations • ACL (BPPF) 2021 • Alëna Aksënova, Daan van Esch, James Flynn, Pavel Golik
The applications of automatic speech recognition (ASR) systems are proliferating, in part due to recent significant quality improvements.
Automatic Speech Recognition (ASR) +1
no code implementations • IWSLT (EMNLP) 2018 • Evgeny Matusov, Patrick Wilken, Parnia Bahar, Julian Schamper, Pavel Golik, Albert Zeyer, Joan Albert Silvestre-Cerda, Adrià Martínez-Villaronga, Hendrik Pesch, Jan-Thorsten Peter
This work describes AppTek’s speech translation pipeline that includes strong state-of-the-art automatic speech recognition (ASR) and neural machine translation (NMT) components.
Automatic Speech Recognition (ASR) +4
no code implementations • 16 May 2022 • Alëna Aksënova, Zhehuai Chen, Chung-Cheng Chiu, Daan van Esch, Pavel Golik, Wei Han, Levi King, Bhuvana Ramabhadran, Andrew Rosenberg, Suzan Schwartz, Gary Wang
However, data sets for accented speech remain scarce, and even for those that are available, better training approaches need to be explored to improve the quality of accented speech recognition.
no code implementations • WS 2020 • Parnia Bahar, Patrick Wilken, Tamer Alkhouli, Andreas Guta, Pavel Golik, Evgeny Matusov, Christian Herold
AppTek and RWTH Aachen University team together to participate in the offline and simultaneous speech translation tracks of IWSLT 2020.
Automatic Speech Recognition (ASR) +4
no code implementations • WS 2020 • Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality.
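The latency/quality trade-off described above is often framed as a READ/WRITE policy; a common fixed schedule is the wait-k policy, which first reads k source words and then alternates between writing one target word and reading one more. The sketch below (illustrative, not the paper's policy) enumerates such an action sequence; the function name and interface are assumptions:

```python
def wait_k_actions(src_len, tgt_len, k):
    """Return the READ/WRITE action sequence of a wait-k policy.

    Larger k means more source context before each target word
    (higher quality, higher latency); smaller k lowers latency.
    """
    actions = []
    reads = writes = 0
    while writes < tgt_len:
        # Read ahead until we are k words in front of the output,
        # or until the source stream is exhausted.
        if reads < min(writes + k, src_len):
            actions.append("READ")
            reads += 1
        else:
            actions.append("WRITE")
            writes += 1
    return actions
```

For example, `wait_k_actions(4, 4, 2)` starts with two READs, then interleaves WRITEs with the remaining READs, finishing with WRITEs once the source is consumed.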
no code implementations • 14 Jun 2019 • Markus Kitza, Pavel Golik, Ralf Schlüter, Hermann Ney
Further, i-vectors were used as an input to the neural network to perform instantaneous speaker and environment adaptation, providing 8% relative improvement in word error rate on the NIST Hub5 2000 evaluation test set.
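A common way to feed an utterance-level i-vector to an acoustic model is to tile it across time and concatenate it with each frame's features; the sketch below shows that idea under assumed shapes and names, and is not the paper's implementation:

```python
import numpy as np

def append_ivector(features, ivector):
    """Concatenate a per-speaker i-vector to every acoustic frame.

    features: (T, feat_dim) array of frame-level features.
    ivector:  (ivec_dim,) speaker/environment embedding.
    Returns a (T, feat_dim + ivec_dim) array the network consumes.
    """
    T = features.shape[0]
    tiled = np.tile(ivector, (T, 1))                   # (T, ivec_dim)
    return np.concatenate([features, tiled], axis=1)   # augmented input
```

Because the same i-vector is repeated on every frame, the network can condition each frame's prediction on the speaker and channel without any change to the frame-synchronous training pipeline.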
no code implementations • 5 May 2017 • Patrick Doetsch, Pavel Golik, Hermann Ney
In this work we compare different batch construction methods for mini-batch training of recurrent neural networks.
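One standard batch construction strategy for recurrent networks is to bucket sequences by length so that each mini-batch pads as little as possible; a minimal sketch of that idea (illustrative names, not the paper's code) is:

```python
def bucket_batches(lengths, batch_size):
    """Group sequence indices into mini-batches of similar length.

    Sorting by length before cutting into batches keeps sequences of
    similar size together, reducing wasted computation on padding
    frames during RNN training.
    """
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size]
            for i in range(0, len(order), batch_size)]
```

For instance, `bucket_batches([5, 1, 3, 2], 2)` pairs the two shortest sequences (indices 1 and 3) in one batch and the two longest (indices 2 and 0) in another, so neither batch pads more than necessary.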