Search Results for author: Benjamin van Niekerk

Found 13 papers, 5 papers with code

Rhythm Modeling for Voice Conversion

1 code implementation • 12 Jul 2023 • Benjamin van Niekerk, Marc-André Carbonneau, Herman Kamper

Voice conversion aims to transform source speech into a different target voice.

Voice Conversion

Voice Conversion With Just Nearest Neighbors

1 code implementation • 30 May 2023 • Matthew Baas, Benjamin van Niekerk, Herman Kamper

Any-to-any voice conversion aims to transform source speech into a target voice with just a few examples of the target speaker as a reference.

Ranked #1 on Voice Conversion on LibriSpeech test-clean (using extra training data)

Voice Conversion
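
The title points to a nearest-neighbour matching scheme: each frame of the source utterance is swapped for similar frames drawn from the target-speaker reference. A minimal sketch of that idea, assuming pre-extracted self-supervised frame features; the similarity measure and the value of k are illustrative choices, not details confirmed by this listing:

```python
import numpy as np

def knn_convert(source_feats, target_feats, k=4):
    """Replace each source frame with the mean of its k nearest
    target-speaker frames (cosine similarity). Inputs are (T, D) arrays."""
    src = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    tgt = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = src @ tgt.T                      # (T_src, T_tgt) similarity matrix
    idx = np.argsort(-sims, axis=1)[:, :k]  # k most similar reference frames
    return target_feats[idx].mean(axis=1)   # converted (T_src, D) features
```

A vocoder trained on the same feature space would then turn the converted feature sequence back into audio.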

Visually grounded few-shot word acquisition with fewer shots

no code implementations • 25 May 2023 • Leanne Nortje, Benjamin van Niekerk, Herman Kamper

Our approach involves using the given word-image example pairs to mine new unsupervised word-image training pairs from large collections of unlabelled speech and images.
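
The snippet describes the pair mining only at a high level; one plausible reading, sketched under the assumption of audio and image encoders that map into a shared embedding space (the encoders, scores, and top-n cut-off are hypothetical):

```python
import numpy as np

def mine_pairs(anchor_audio, anchor_image,
               unlabelled_audio, unlabelled_image, top_n=50):
    """Given one word-image example (L2-normalised embeddings), rank the
    unlabelled collections by similarity to it and cross-pair the best
    candidates as new, noisy word-image training pairs."""
    audio_rank = np.argsort(-(unlabelled_audio @ anchor_audio))[:top_n]
    image_rank = np.argsort(-(unlabelled_image @ anchor_image))[:top_n]
    return [(a, i) for a in audio_rank for i in image_rank]
```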

Towards localisation of keywords in speech using weak supervision

no code implementations • 14 Dec 2020 • Kayode Olaleye, Benjamin van Niekerk, Herman Kamper

Of the two forms of supervision, the visually trained model performs worse than the BoW-trained model.

Towards unsupervised phone and word segmentation using self-supervised vector-quantized neural networks

no code implementations • 14 Dec 2020 • Herman Kamper, Benjamin van Niekerk

We specifically constrain pretrained self-supervised vector-quantized (VQ) neural networks so that blocks of contiguous feature vectors are assigned to the same code, thereby giving a variable-rate segmentation of the speech into discrete units.

Clustering • Segmentation
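
The snippet above states the core constraint; a minimal dynamic-programming sketch of it, assuming frame features, a fixed codebook from the pretrained VQ model, and a per-segment duration penalty (the penalty weight and maximum segment length are illustrative):

```python
import numpy as np

def segment(feats, codebook, dur_penalty=1.0, max_len=20):
    """Split a (T, D) feature sequence into contiguous blocks, each assigned
    to one codebook entry, minimising squared error plus a per-segment cost."""
    T = len(feats)
    best = np.full(T + 1, np.inf)
    best[0] = 0.0
    back = [None] * (T + 1)                       # (segment start, code index)
    for end in range(1, T + 1):
        for start in range(max(0, end - max_len), end):
            block = feats[start:end]
            # Distance of the whole block to each code; pick the closest.
            dists = ((block[:, None, :] - codebook[None]) ** 2).sum(-1).sum(0)
            code = int(np.argmin(dists))
            cost = best[start] + dists[code] + dur_penalty
            if cost < best[end]:
                best[end], back[end] = cost, (start, code)
    segments, end = [], T                         # trace back the best split
    while end > 0:
        start, code = back[end]
        segments.append((start, end, code))
        end = start
    return segments[::-1]
```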

Online Constrained Model-based Reinforcement Learning

no code implementations • 7 Apr 2020 • Benjamin van Niekerk, Andreas Damianou, Benjamin Rosman

The environment's dynamics are learned from limited training data and can be reused in new task instances without retraining.

Gaussian Processes • Model-based Reinforcement Learning +2
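
The snippet and tags suggest a Gaussian-process dynamics model fit once on a small set of transitions and then reused whenever the task (for example, the reward) changes. A rough sketch with scikit-learn; the random-shooting planner and reward function here are stand-ins, not the paper's constrained planner:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_dynamics(states, actions, next_states):
    """GP mapping (state, action) -> state change, fit on limited data."""
    X = np.hstack([states, actions])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(X, next_states - states)
    return gp

def plan(gp, state, reward_fn, horizon=10, n_samples=200, act_dim=1):
    """Random-shooting planning: reuse the fitted dynamics in a new task by
    scoring candidate action sequences under the new reward only."""
    best_action, best_return = None, -np.inf
    for _ in range(n_samples):
        actions = np.random.uniform(-1, 1, size=(horizon, act_dim))
        s, ret = state.copy(), 0.0
        for a in actions:
            s = s + gp.predict(np.hstack([s, a])[None])[0]
            ret += reward_fn(s, a)
        if ret > best_return:
            best_return, best_action = ret, actions[0]
    return best_action
```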

If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks

no code implementations • 13 Oct 2019 • Arnu Pretorius, Elan van Biljon, Benjamin van Niekerk, Ryan Eloff, Matthew Reynard, Steve James, Benjamin Rosman, Herman Kamper, Steve Kroon

Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides zero performance gains when compared to off-critical initialisations, and that searching for off-critical initialisations that might improve training speed or generalisation is likely to be a fruitless endeavour.
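
For context, "critical initialisation" for a ReLU network with dropout means choosing the weight variance so that activation variance neither explodes nor vanishes with depth; under the standard mean-field analysis this is a He-style scale corrected by the dropout keep probability. A small sketch of a critical versus off-critical setup (the correction factor follows that general analysis, not numbers taken from this paper):

```python
import numpy as np

def init_relu_dropout_net(layer_sizes, keep_prob=0.8, critical=True):
    """Weights for a ReLU + dropout MLP. At criticality the per-layer weight
    variance is 2*keep_prob/fan_in; any other scale is off-critical."""
    scale = 1.0 if critical else 1.5          # off-critical: inflated variance
    return [np.random.randn(m, n) * np.sqrt(scale * 2.0 * keep_prob / m)
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def layer_variances(weights, keep_prob=0.8, n=1000):
    """Push random inputs through the network and report how the
    activation variance evolves with depth."""
    x = np.random.randn(n, weights[0].shape[0])
    for W in weights:
        mask = (np.random.rand(n, W.shape[1]) < keep_prob) / keep_prob
        x = np.maximum(x @ W, 0.0) * mask     # inverted dropout after ReLU
        print(f"activation variance: {x.var():.3f}")
```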
