no code implementations • LREC 2022 • Felix Burkhardt, Anabell Hacker, Uwe Reichel, Hagen Wierstorf, Florian Eyben, Björn Schuller
For several decades, emotional databases have been recorded by various laboratories.
1 code implementation • LREC 2022 • Felix Burkhardt, Johannes Wagner, Hagen Wierstorf, Florian Eyben, Björn Schuller
We present advancements with a software tool called Nkululeko, which lets users perform (semi-)supervised machine learning experiments in the speaker characteristics domain.
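To make the underlying workflow concrete, here is a minimal scikit-learn sketch of the kind of supervised speaker-characteristics experiment such a tool wraps. This is an illustration only, not Nkululeko's actual API; all data, feature dimensions, and labels below are invented.

```python
# Illustrative sketch only -- not Nkululeko's actual API. It shows the generic
# pipeline (acoustic feature matrix -> classifier -> unweighted average recall)
# that such a tool automates, here on synthetic stand-in data.
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))  # pretend: 200 utterances x 64 acoustic descriptors
# Pretend binary speaker-trait label, weakly encoded in the first descriptor
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X_tr, y_tr)
# Unweighted average recall (UAR) is a common metric for speaker traits
uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.2f}")
```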
no code implementations • 11 Dec 2023 • Anna Derington, Hagen Wierstorf, Ali Özkil, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Machine learning models for speech emotion recognition (SER) can be trained for different tasks and are usually evaluated on the basis of a few available datasets per task.
1 code implementation • 1 Mar 2023 • Hagen Wierstorf, Johannes Wagner, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Driven by the need for larger and more diverse datasets to pre-train and fine-tune increasingly complex machine learning models, the number of datasets is rapidly growing.
no code implementations • 1 Apr 2022 • Andreas Triantafyllopoulos, Johannes Wagner, Hagen Wierstorf, Maximilian Schmitt, Uwe Reichel, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Large, pre-trained neural networks consisting of self-attention layers (transformers) have recently achieved state-of-the-art results on several speech emotion recognition (SER) datasets.
1 code implementation • 14 Mar 2022 • Johannes Wagner, Andreas Triantafyllopoulos, Hagen Wierstorf, Maximilian Schmitt, Felix Burkhardt, Florian Eyben, Björn W. Schuller
Recent advances in transformer-based architectures that are pre-trained in a self-supervised manner have shown great promise in several machine learning tasks.
no code implementations • 1 Nov 2018 • Emad M. Grais, Hagen Wierstorf, Dominic Ward, Russell Mason, Mark D. Plumbley
Current performance evaluation for audio source separation depends on comparing the processed or separated signals with reference signals.
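A common instance of such reference-based evaluation is the signal-to-distortion ratio (SDR). As a minimal sketch of the idea (a basic SDR variant, not the full BSS-Eval decomposition into interference and artifact terms), with synthetic signals standing in for real audio:

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Basic signal-to-distortion ratio in dB: energy of the reference
    divided by the energy of the error between estimate and reference."""
    noise = estimate - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
ref = rng.normal(size=16000)                  # 1 s "clean" source at 16 kHz
est = ref + 0.1 * rng.normal(size=16000)      # separated signal + residual noise
score = sdr(ref, est)                         # higher dB = closer to reference
print(f"SDR: {score:.1f} dB")
```

With a residual at 10% of the reference amplitude, the score lands near 20 dB; a perfect separation would diverge to infinity, which is one reason evaluation toolkits report several complementary metrics.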
no code implementations • 28 Oct 2017 • Emad M. Grais, Hagen Wierstorf, Dominic Ward, Mark D. Plumbley
In deep neural networks with convolutional layers, each layer typically has a fixed-size, single-resolution receptive field (RF).
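The general alternative the abstract alludes to is running kernels of several sizes in parallel, so one layer sees the input at multiple resolutions (Inception-style). A minimal NumPy sketch of that idea in 1-D, not the paper's exact architecture:

```python
import numpy as np

def conv1d(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-padded 1-D convolution: output length equals input length."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # e.g. one row of a spectrogram

# Parallel branches with different kernel sizes = different receptive fields,
# stacked so the next layer sees all three resolutions at once
branches = [conv1d(x, rng.normal(size=k)) for k in (3, 5, 9)]
multi_res = np.stack(branches)
print(multi_res.shape)  # (3, 100)
```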