Search Results for author: Benjamin van Niekerk

Found 16 papers, 6 papers with code

LinearVC: Linear transformations of self-supervised features through the lens of voice conversion

no code implementations • 2 Jun 2025 • Herman Kamper, Benjamin van Niekerk, Julian Zaïdi, Marc-André Carbonneau

We introduce LinearVC, a simple voice conversion method that sheds light on the structure of self-supervised representations.

Voice Conversion
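
The snippet only hints at the method, but the title points to a linear map between self-supervised feature spaces. As a hedged illustration (not necessarily the paper's formulation), such a map could be fit by ordinary least squares on frame-aligned source and target features; the function names and the alignment assumption below are ours.

```python
import numpy as np

def fit_linear_map(src_feats: np.ndarray, tgt_feats: np.ndarray) -> np.ndarray:
    """Fit W minimising ||src_feats @ W - tgt_feats||^2 by least squares.

    src_feats, tgt_feats: (num_frames, feat_dim) arrays of self-supervised
    features, assumed to be frame-aligned (e.g. via DTW).
    """
    W, *_ = np.linalg.lstsq(src_feats, tgt_feats, rcond=None)
    return W

def convert(src_feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map source-speaker features into the target speaker's feature space."""
    return src_feats @ W
```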

Unsupervised Word Discovery: Boundary Detection with Clustering vs. Dynamic Programming

no code implementations • 22 Sep 2024 • Simon Malan, Benjamin van Niekerk, Herman Kamper

We look at the long-standing problem of segmenting unlabeled speech into word-like segments and clustering these into a lexicon.

Boundary Detection • Clustering
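
As a rough sketch of the clustering half of this pipeline only (the boundary-detection step is assumed to have already produced one embedding per hypothesised word segment; the helper below is illustrative, not the paper's algorithm), the segments could be grouped into a lexicon with k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_lexicon(segment_embeddings: np.ndarray, num_word_types: int = 100):
    """Cluster fixed-dimensional embeddings of hypothesised word segments.

    segment_embeddings: (num_segments, dim) array, one row per segment
    proposed by a separate boundary-detection step (not shown here).
    Returns a cluster id per segment (its lexicon entry) and the centroids.
    """
    kmeans = KMeans(n_clusters=num_word_types, n_init=10, random_state=0)
    ids = kmeans.fit_predict(segment_embeddings)
    return ids, kmeans.cluster_centers_
```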

Spoken-Term Discovery using Discrete Speech Units

1 code implementation • 26 Aug 2024 • Benjamin van Niekerk, Julian Zaïdi, Marc-André Carbonneau, Herman Kamper

Discovering a lexicon from unlabeled audio is a longstanding challenge for zero-resource speech processing.

Rhythm Modeling for Voice Conversion

1 code implementation • 12 Jul 2023 • Benjamin van Niekerk, Marc-André Carbonneau, Herman Kamper

Voice conversion aims to transform source speech into a different target voice.

Rhythm • Voice Conversion

Voice Conversion With Just Nearest Neighbors

1 code implementation • 30 May 2023 • Matthew Baas, Benjamin van Niekerk, Herman Kamper

Any-to-any voice conversion aims to transform source speech into a target voice with just a few examples of the target speaker as a reference.

 Ranked #1 on Voice Conversion on LibriSpeech test-clean (using extra training data)

Voice Conversion
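
The title suggests a nearest-neighbour scheme over self-supervised features. A minimal sketch, assuming cosine similarity over frame-level features and omitting the vocoding step; the value of k and the distance measure here are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def knn_convert(src_feats: np.ndarray, tgt_feats: np.ndarray, k: int = 4) -> np.ndarray:
    """Replace each source frame with the mean of its k nearest target frames.

    src_feats: (num_src_frames, dim) features of the source utterance.
    tgt_feats: (num_tgt_frames, dim) pooled reference features of the target
    speaker. A vocoder (not shown) would turn the converted features back
    into audio.
    """
    src = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sims = src @ tgt.T                          # (num_src, num_tgt) cosine similarity
    nearest = np.argsort(-sims, axis=1)[:, :k]  # indices of the k most similar target frames
    return tgt_feats[nearest].mean(axis=1)      # average the matched target frames
```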

Visually grounded few-shot word acquisition with fewer shots

no code implementations • 25 May 2023 • Leanne Nortje, Benjamin van Niekerk, Herman Kamper

Our approach involves using the given word-image example pairs to mine new unsupervised word-image training pairs from large collections of unlabelled speech and images.
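
A hedged sketch of what such mining could look like for a single keyword, assuming precomputed, L2-normalised audio and image embeddings; the top-N strategy and all names here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mine_pairs(anchor_audio, anchor_image, unlab_audio, unlab_image, top_n=100):
    """Mine new word-image pairs for one keyword from unlabelled collections.

    All inputs are L2-normalised embedding matrices (one row per item).
    Unlabelled utterances and images that score highest against the few-shot
    anchors are paired up as noisy positive training examples.
    """
    audio_scores = (unlab_audio @ anchor_audio.T).max(axis=1)
    image_scores = (unlab_image @ anchor_image.T).max(axis=1)
    top_audio = np.argsort(-audio_scores)[:top_n]
    top_images = np.argsort(-image_scores)[:top_n]
    return list(zip(top_audio, top_images))  # indices of mined (speech, image) pairs
```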

Towards localisation of keywords in speech using weak supervision

no code implementations • 14 Dec 2020 • Kayode Olaleye, Benjamin van Niekerk, Herman Kamper

Of the two forms of supervision, the visually trained model performs worse than the BoW-trained model.

Towards unsupervised phone and word segmentation using self-supervised vector-quantized neural networks

no code implementations • 14 Dec 2020 • Herman Kamper, Benjamin van Niekerk

We specifically constrain pretrained self-supervised vector-quantized (VQ) neural networks so that blocks of contiguous feature vectors are assigned to the same code, thereby giving a variable-rate segmentation of the speech into discrete units.

Clustering • Segmentation
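
As a hedged illustration of this idea (the fixed per-segment penalty and the exact objective below are our assumptions, not necessarily the paper's constrained algorithm), contiguous blocks of frames can be forced onto a single code with a simple dynamic program:

```python
import numpy as np

def segment_vq(feats: np.ndarray, codebook: np.ndarray, dur_penalty: float = 20.0):
    """Assign contiguous blocks of frames to a single code via dynamic programming.

    feats: (T, dim) frame-level features; codebook: (K, dim) VQ code vectors.
    Each candidate segment [s, t) is scored by the best single code's summed
    squared distance plus a per-segment penalty that discourages
    over-segmentation. Returns (boundaries, codes).
    """
    T = len(feats)
    dists = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    cum = np.concatenate([np.zeros((1, len(codebook))), np.cumsum(dists, axis=0)])
    best = np.full(T + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(T + 1, dtype=int)
    code = np.zeros(T + 1, dtype=int)
    for t in range(1, T + 1):
        for s in range(t):
            seg = cum[t] - cum[s]                   # summed distance per code over [s, t)
            cost = best[s] + seg.min() + dur_penalty
            if cost < best[t]:
                best[t], back[t], code[t] = cost, s, seg.argmin()
    bounds, codes, t = [], [], T
    while t > 0:
        bounds.append((back[t], t))
        codes.append(int(code[t]))
        t = back[t]
    return bounds[::-1], codes[::-1]
```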

Online Constrained Model-based Reinforcement Learning

no code implementations • 7 Apr 2020 • Benjamin van Niekerk, Andreas Damianou, Benjamin Rosman

The environment's dynamics are learned from limited training data and can be reused in new task instances without retraining.

Autonomous Racing • Gaussian Processes • +5
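
The Gaussian Processes tag suggests the learned dynamics model is a GP. A minimal sketch of fitting such a model from a handful of transitions, using scikit-learn; the kernel choice and the delta-state prediction target are our own assumptions, not details from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_dynamics(states, actions, next_states):
    """Fit a GP dynamics model from a small batch of transitions.

    Inputs have shapes (N, state_dim), (N, action_dim), (N, state_dim).
    One independent GP per state dimension predicts the change in that
    dimension; the fitted models can then be reused across task instances
    (e.g. new reward functions) without collecting new data.
    """
    X = np.hstack([states, actions])
    Y = next_states - states  # predict state deltas rather than absolute states
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, d])
            for d in range(Y.shape[1])]
```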

If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks

no code implementations • 13 Oct 2019 • Arnu Pretorius, Elan van Biljon, Benjamin van Niekerk, Ryan Eloff, Matthew Reynard, Steve James, Benjamin Rosman, Herman Kamper, Steve Kroon

Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides zero performance gains compared to off-critical initialisations, and that searching for off-critical initialisations that might improve training speed or generalisation is likely to be a fruitless endeavour.

Will it Blend? Composing Value Functions in Reinforcement Learning

no code implementations • 12 Jul 2018 • Benjamin van Niekerk, Steven James, Adam Earle, Benjamin Rosman

An important property for lifelong-learning agents is the ability to combine existing skills to solve unseen tasks.

Lifelong learning • reinforcement-learning • +2
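
One way to make "combining existing skills" concrete, as a hedged sketch rather than the paper's derivation: given Q-functions for individual tasks, a soft maximum over them gives a value function for satisfying either task, approaching a hard max as the temperature goes to zero. The composition rule below is illustrative only.

```python
import numpy as np

def compose_q(q_list, temperature: float = 1.0) -> np.ndarray:
    """Compose per-task Q-tables into one Q-table for the 'either task' goal.

    q_list: list of (num_states, num_actions) arrays, one per solved task.
    A soft maximum (scaled log-mean-exp) over tasks is used here; as the
    temperature tends to zero this approaches a hard max over the
    individual value functions.
    """
    stacked = np.stack(q_list)  # (num_tasks, num_states, num_actions)
    return temperature * np.log(np.exp(stacked / temperature).mean(axis=0))
```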
