no code implementations • DCLRL (LREC) 2022 • Felix Burkhardt, Florian Eyben, Björn Schuller
Speech emotion recognition has been a focus of research for several decades and has many applications.
no code implementations • LREC 2022 • Felix Burkhardt, Anabell Hacker, Uwe Reichel, Hagen Wierstorf, Florian Eyben, Björn Schuller
For several decades, emotional databases have been recorded by various laboratories.
1 code implementation • LREC 2022 • Felix Burkhardt, Johannes Wagner, Hagen Wierstorf, Florian Eyben, Björn Schuller
We present advancements in a software tool called Nkululeko, which lets users perform (semi-)supervised machine learning experiments in the speaker characteristics domain.
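As a rough sketch of how such a tool is typically driven, Nkululeko experiments are configured via INI-style files; the section and key names below are an assumption based on common usage of the tool and may differ between versions:

```ini
; hypothetical minimal Nkululeko experiment configuration (names assumed)
[EXP]
root = ./experiments/
name = emodb_svm_demo

[DATA]
databases = ['emodb']   ; emotional speech database to load (assumed identifier)
target = emotion        ; speaker characteristic to predict

[FEATS]
type = ['os']           ; openSMILE acoustic features

[MODEL]
type = svm              ; classic support-vector classifier baseline
```

The point of such a configuration-driven design is that users can swap datasets, feature sets, and classifiers without writing code.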
no code implementations • 11 Dec 2023 • Anna Derington, Hagen Wierstorf, Ali Özkil, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Machine learning models for speech emotion recognition (SER) can be trained for different tasks and are usually evaluated on the basis of a few available datasets per task.
1 code implementation • 1 Mar 2023 • Hagen Wierstorf, Johannes Wagner, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Driven by the need for larger and more diverse datasets to pre-train and fine-tune increasingly complex machine learning models, the number of datasets is rapidly growing.
no code implementations • 1 Apr 2022 • Andreas Triantafyllopoulos, Johannes Wagner, Hagen Wierstorf, Maximilian Schmitt, Uwe Reichel, Florian Eyben, Felix Burkhardt, Björn W. Schuller
Large, pre-trained neural networks consisting of self-attention layers (transformers) have recently achieved state-of-the-art results on several speech emotion recognition (SER) datasets.
1 code implementation • 14 Mar 2022 • Johannes Wagner, Andreas Triantafyllopoulos, Hagen Wierstorf, Maximilian Schmitt, Felix Burkhardt, Florian Eyben, Björn W. Schuller
Recent advances in transformer-based architectures, which are pre-trained in a self-supervised manner, have shown great promise in several machine learning tasks.
no code implementations • 13 Oct 2021 • Andreas Triantafyllopoulos, Uwe Reichel, Shuo Liu, Stephan Huber, Florian Eyben, Björn W. Schuller
In this contribution, we investigate the effectiveness of deep fusion of text and audio features for categorical and dimensional speech emotion recognition (SER).
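To illustrate the general idea of feature-level fusion of text and audio (a minimal sketch only; the paper's actual architecture, embedding sizes, and classifier head are assumptions here):

```python
# Minimal sketch of deep feature-level fusion for speech emotion recognition.
# Embedding dimensions and the single dense layer are illustrative assumptions,
# not the architecture from the paper.
import numpy as np

rng = np.random.default_rng(0)

def fuse(text_emb: np.ndarray, audio_emb: np.ndarray,
         w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Concatenate modality embeddings and apply one dense layer."""
    joint = np.concatenate([text_emb, audio_emb])  # feature-level fusion
    return w @ joint + b                           # logits over emotion classes

text_emb = rng.standard_normal(768)     # e.g. a BERT-style sentence embedding
audio_emb = rng.standard_normal(1024)   # e.g. a wav2vec 2.0 utterance embedding
w = rng.standard_normal((4, 768 + 1024)) * 0.01  # 4 categorical emotions
b = np.zeros(4)

logits = fuse(text_emb, audio_emb, w, b)
print(logits.shape)  # (4,)
```

In practice, "deep" fusion would merge the modalities inside the network rather than at the final layer, but the concatenation above captures the basic mechanism.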
no code implementations • 17 Dec 2020 • Katrin D. Bartl-Pokorny, Florian B. Pokorny, Anton Batliner, Shahin Amiriparian, Anastasia Semertzidou, Florian Eyben, Elena Kramer, Florian Schmidt, Rainer Schönweiler, Markus Wehler, Björn W. Schuller
Group differences in the front vowels /i:/ and /e:/ are additionally reflected in the variation of the fundamental frequency and the harmonics-to-noise ratio; group differences in the back vowels /o:/ and /u:/ are reflected in statistics of the Mel-frequency cepstral coefficients and in the spectral slope.
no code implementations • 30 Aug 2019 • Anton Batliner, Stefan Steidl, Florian Eyben, Björn Schuller
In this article, we study laughter found in child-robot interaction where it had not been prompted intentionally.
no code implementations • 3 May 2018 • Andreas Triantafyllopoulos, Hesam Sagha, Florian Eyben, Björn Schuller
This paper describes audEERING's submissions as well as additional evaluations for the One-Minute-Gradual (OMG) emotion recognition challenge.
no code implementations • 15 Dec 2014 • Felix Weninger, Björn Schuller, Florian Eyben, Martin Wöllmer, Gerhard Rigoll
Transcription of broadcast news is an interesting and challenging application for large-vocabulary continuous speech recognition (LVCSR).
no code implementations • LREC 2014 • Björn Schuller, Felix Friedmann, Florian Eyben
The baseline results clearly show the feasibility of automatic estimation of heart rate from the human voice, in particular from sustained vowels.