Search Results for author: Ewald van der Westhuizen

Found 12 papers, 1 paper with code

Building a Unified Code-Switching ASR System for South African Languages

no code implementations • 28 Jul 2018 • Emre Yilmaz, Astik Biswas, Ewald van der Westhuizen, Febe De Wet, Thomas Niesler

We present our first efforts towards building a single multilingual automatic speech recognition (ASR) system that can process code-switching (CS) speech in five languages spoken within the same population.

Automatic Speech Recognition (ASR), +1

Semi-supervised acoustic model training for five-lingual code-switched ASR

no code implementations • 20 Jun 2019 • Astik Biswas, Emre Yilmaz, Febe De Wet, Ewald van der Westhuizen, Thomas Niesler

Because English is common to all language pairs in our data, it dominates when training a unified language model, leading to improved English ASR performance at the expense of the other languages.

Acoustic Modelling, Language Modelling
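
As an illustration of the data-imbalance issue described above (not the authors' recipe), the sketch below pools per-language LM text only after down-sampling every corpus to the size of the smallest one, so the largest corpus (typically English) cannot dominate the unified language model by volume alone. The sentences and language labels are toy placeholders.

    # Illustrative only: balance per-language text before pooling it for a
    # unified language model. Sentences and language labels are placeholders.
    import random

    corpora = {
        "english":    ["sentence %d in english" % i for i in range(1000)],
        "language_b": ["sentence %d in language b" % i for i in range(200)],
        "language_c": ["sentence %d in language c" % i for i in range(150)],
    }

    random.seed(0)
    cap = min(len(sents) for sents in corpora.values())   # smallest corpus size

    balanced_pool = []
    for lang, sents in corpora.items():
        balanced_pool.extend(random.sample(sents, cap))   # down-sample larger corpora

    random.shuffle(balanced_pool)
    print(len(balanced_pool), "sentences in the balanced LM training pool")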

Semi-supervised Development of ASR Systems for Multilingual Code-switched Speech in Under-resourced Languages

no code implementations • LREC 2020 • Astik Biswas, Emre Yilmaz, Febe De Wet, Ewald van der Westhuizen, Thomas Niesler

This paper reports on the semi-supervised development of acoustic and language models for under-resourced, code-switched speech in five South African languages.
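
A minimal sketch of the confidence-based selection step that semi-supervised ASR training typically relies on: utterances decoded by a seed system enter the training pool only if their confidence clears a threshold, after which the models are retrained on the enlarged pool. The utterance IDs, hypotheses, scores and threshold below are illustrative placeholders, not values from the paper.

    # Confidence-based selection of automatically transcribed utterances
    # (illustrative data only).
    from dataclasses import dataclass

    @dataclass
    class DecodedUtterance:
        utt_id: str
        hypothesis: str      # automatic transcription from the seed system
        confidence: float    # per-utterance confidence score in [0, 1]

    def select_for_retraining(decoded, threshold=0.8):
        """Keep only confidently decoded utterances for the next training pass."""
        return [u for u in decoded if u.confidence >= threshold]

    decoded_batch = [
        DecodedUtterance("utt001", "ngiyabonga thank you", 0.93),
        DecodedUtterance("utt002", "uhm [noise] eh", 0.41),
        DecodedUtterance("utt003", "he said angazi I don't know", 0.87),
    ]

    selected = select_for_retraining(decoded_batch)
    print(f"{len(selected)} of {len(decoded_batch)} utterances added to the training pool")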

Multilingual Bottleneck Features for Improving ASR Performance of Code-Switched Speech in Under-Resourced Languages

1 code implementation • 31 Oct 2020 • Trideba Padhi, Astik Biswas, Febe De Wet, Ewald van der Westhuizen, Thomas Niesler

In this work, we explore the benefits of using multilingual bottleneck features (mBNF) in acoustic modelling for the automatic speech recognition of code-switched (CS) speech in African languages.

Acoustic Modelling, Automatic Speech Recognition, +2
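
The sketch below illustrates what a bottleneck extractor does at inference time: a network with a narrow hidden layer maps each acoustic frame to a low-dimensional mBNF vector that is then fed to the target acoustic model. The layer sizes and random weights are placeholders; a real extractor would be trained multilingually on donor-language data.

    # Toy mBNF extraction: random weights stand in for a trained multilingual network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sizes: 40-dim filterbank input -> 1024 hidden -> 40-dim bottleneck.
    W1   = rng.standard_normal((40, 1024)) * 0.01
    W_bn = rng.standard_normal((1024, 40)) * 0.01   # bottleneck layer

    def extract_mbnf(features):
        """Map input frames (num_frames x 40) to bottleneck activations."""
        h1 = np.maximum(features @ W1, 0.0)          # ReLU hidden layer
        return h1 @ W_bn                             # linear bottleneck outputs

    frames = rng.standard_normal((100, 40))          # stand-in for filterbank frames
    mbnf = extract_mbnf(frames)
    print(mbnf.shape)                                # (100, 40): one mBNF vector per frame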

Feature learning for efficient ASR-free keyword spotting in low-resource languages

no code implementations • 13 Aug 2021 • Ewald van der Westhuizen, Herman Kamper, Raghav Menon, John Quinn, Thomas Niesler

We show that, using these features, the CNN-DTW keyword spotter performs almost as well as the DTW keyword spotter while outperforming a baseline CNN trained only on the keyword templates.

Dynamic Time Warping, Humanitarian, +1
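
For context, a minimal DTW keyword-spotting sketch: the keyword template and the search utterance are sequences of feature vectors, and a low DTW alignment cost over a sliding window signals a likely hit. The random vectors stand in for the learned features discussed in the paper; this is not the authors' CNN-DTW implementation.

    # DTW keyword spotting over random placeholder features.
    import numpy as np

    def dtw_cost(template, segment):
        """Normalised DTW cost between two (frames x dims) feature sequences."""
        n, m = len(template), len(segment)
        dist = np.linalg.norm(template[:, None, :] - segment[None, :, :], axis=-1)
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],
                                                     acc[i, j - 1],
                                                     acc[i - 1, j - 1])
        return acc[n, m] / (n + m)

    rng = np.random.default_rng(1)
    template  = rng.standard_normal((30, 13))    # keyword template (e.g. MFCC frames)
    utterance = rng.standard_normal((200, 13))   # search utterance

    # Slide a window over the utterance and keep the best-matching position.
    win = len(template)
    costs = [dtw_cost(template, utterance[s:s + win])
             for s in range(0, len(utterance) - win, 10)]
    print("best match cost:", min(costs))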

Multilingual training set selection for ASR in under-resourced Malian languages

no code implementations • 13 Aug 2021 • Ewald van der Westhuizen, Trideba Padhi, Thomas Niesler

We find that, although maximising the training pool by including all six additional languages provides improved speech recognition in both target languages, substantially better performance can be achieved by a more judicious choice.

Humanitarian, Speech Recognition, +1
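
A toy sketch of greedy training-pool selection, assuming the goal is to keep a donor language only if it lowers development-set WER for the target language. The target label, donor names, scoring function and its per-language effects are synthetic placeholders standing in for actually training and evaluating an acoustic model on each candidate pool.

    # Greedy selection of donor languages using a synthetic scoring function.
    target = "target_language"                  # hypothetical target-language label
    donors = ["lang_a", "lang_b", "lang_c", "lang_d", "lang_e", "lang_f"]

    def dev_set_error(training_pool):
        """Placeholder: pretend each donor shifts the dev-set WER by a fixed amount."""
        synthetic_effect = {"lang_a": -1.2, "lang_b": +0.8, "lang_c": -0.5,
                            "lang_d": +1.5, "lang_e": -0.3, "lang_f": +0.2}
        return 40.0 + sum(synthetic_effect.get(l, 0.0) for l in training_pool)

    pool = [target]
    best = dev_set_error(pool)
    for lang in donors:
        candidate = pool + [lang]
        score = dev_set_error(candidate)
        if score < best:                        # keep the donor only if WER drops
            pool, best = candidate, score

    print("selected pool:", pool, "| simulated dev WER: %.1f%%" % best)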
