Weighted Feature Fusion Based Emotional Recognition for Variable-length Speech using DNN

1 Jan 2019  ·  Sifan Wu, Fei Li, Pengyuan Zhang

Emotion recognition, a key technology in multimedia communication, plays an increasingly important role in human-computer interaction systems. Because neural networks can automatically learn intermediate representations of the raw speech signal, most current methods use a Convolutional Neural Network (CNN) to extract information directly from spectrograms, but this can leave the information in hand-crafted features underused. In this work, a model based on a weighted feature fusion method is proposed for emotion recognition on variable-length speech. Since chroma-based features are closely related to speech emotion, our model improves performance by combining CNN-based features with chroma-based features, effectively exploiting the useful information in the chromagram. We evaluated the model on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset and achieved more than a 5% increase in weighted accuracy (WA) and unweighted accuracy (UA) compared with existing state-of-the-art methods.
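To make the fusion idea concrete, below is a minimal sketch of one way such a weighted fusion of CNN-based and chroma-based features could look. This is an illustrative assumption, not the paper's actual architecture: the `WeightedFusionClassifier` name, the CNN layout, the hidden sizes, and the use of learnable softmax-normalized fusion weights with adaptive pooling (one common way to handle variable-length input) are all hypothetical.

```python
# Hypothetical sketch of weighted feature fusion for variable-length speech
# emotion recognition; dimensions and layer choices are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusionClassifier(nn.Module):
    def __init__(self, n_chroma=12, hidden=128, n_classes=4):
        super().__init__()
        # CNN branch: learns intermediate representations from the spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            # Adaptive pooling collapses time/frequency, so utterances of
            # different lengths map to a fixed-size feature vector.
            nn.AdaptiveAvgPool2d(1),
        )
        self.cnn_proj = nn.Linear(64, hidden)
        # Chroma branch: summarizes the 12-bin chromagram over time.
        self.chroma_proj = nn.Linear(n_chroma, hidden)
        # Learnable fusion weights, normalized with softmax so they sum to 1.
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, spec, chroma):
        # spec:   (batch, 1, n_freq, T) -- spectrogram, T may vary per utterance
        # chroma: (batch, n_chroma, T)  -- chromagram of the same utterance
        cnn_feat = self.cnn_proj(self.cnn(spec).flatten(1))
        chroma_feat = self.chroma_proj(chroma.mean(dim=-1))  # average over time
        w = F.softmax(self.fusion_logits, dim=0)
        fused = w[0] * cnn_feat + w[1] * chroma_feat          # weighted fusion
        return self.classifier(fused)

# Example: an utterance with 250 frames; a longer or shorter one works too.
model = WeightedFusionClassifier()
logits = model(torch.randn(1, 1, 128, 250), torch.randn(1, 12, 250))
```

Here the softmax-normalized scalar weights let the network learn how much to trust each feature stream; richer variants (per-dimension weights, attention) would follow the same pattern.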
