Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition

This paper presents a lightweight yet powerful solution for the tasks of Emotion Recognition and Sentiment Analysis. We propose two architectures based on Transformers and modulation that combine linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state of the art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performance on the IEMOCAP, MOSI, MOSEI and MELD datasets. The experiments can be directly replicated, and the code is fully open for future research.
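The abstract does not spell out the modulation mechanism, so the sketch below is one plausible reading rather than the paper's actual implementation: a FiLM-style scheme in PyTorch where pooled linguistic features predict per-feature scale and shift parameters that modulate the acoustic sequence before a Transformer encoder. All names and dimensions (`ModulatedFusionEncoder`, `dim=256`, 7 output classes, etc.) are illustrative assumptions.

```python
# A minimal sketch of modulation-based linguistic-acoustic fusion,
# assuming a FiLM-style scheme: the pooled linguistic representation
# predicts scale (gamma) and shift (beta) vectors applied to acoustic
# features, which are then encoded by a Transformer. Illustrative only.
import torch
import torch.nn as nn

class ModulatedFusionEncoder(nn.Module):
    def __init__(self, dim=256, n_heads=4, n_layers=2, n_classes=7):
        super().__init__()
        # Linear maps from the pooled text vector to modulation parameters.
        self.to_gamma = nn.Linear(dim, dim)
        self.to_beta = nn.Linear(dim, dim)
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, acoustic, linguistic):
        # acoustic:   (batch, frames, dim) frame-level audio features
        # linguistic: (batch, tokens, dim) token-level text features
        pooled = linguistic.mean(dim=1)             # (batch, dim)
        gamma = self.to_gamma(pooled).unsqueeze(1)  # (batch, 1, dim)
        beta = self.to_beta(pooled).unsqueeze(1)    # (batch, 1, dim)
        modulated = gamma * acoustic + beta         # feature-wise modulation
        encoded = self.encoder(modulated)           # Transformer over audio
        return self.classifier(encoded.mean(dim=1)) # utterance-level logits

model = ModulatedFusionEncoder()
logits = model(torch.randn(8, 120, 256), torch.randn(8, 30, 256))
print(logits.shape)  # torch.Size([8, 7])
```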

PDF Abstract: EMNLP (nlpbt) 2020

Results from the Paper


Ranked #6 on Multimodal Sentiment Analysis on CMU-MOSEI (using extra training data)

Task                          | Dataset   | Model                        | Metric   | Value | Global Rank | Uses Extra Training Data
Multimodal Sentiment Analysis | CMU-MOSEI | Modulated-fusion transformer | Accuracy | 82.45 | #6          | Yes
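For context, CMU-MOSEI sentiment labels are continuous scores in [-3, 3], and accuracy on this benchmark is commonly reported as binary accuracy after sign-binarization. The snippet below shows that convention; whether zero (neutral) labels are excluded varies across papers, so the masking here is an assumption rather than this paper's documented protocol.

```python
# Illustrative binary-accuracy computation for CMU-MOSEI-style labels:
# continuous sentiment scores are binarized by sign, with neutral (zero)
# labels dropped under one common convention. Not the paper's exact setup.
import numpy as np

def binary_accuracy(preds: np.ndarray, labels: np.ndarray) -> float:
    mask = labels != 0  # exclude neutral examples (one common convention)
    return float(np.mean((preds[mask] > 0) == (labels[mask] > 0)))

preds = np.array([1.2, -0.5, 2.1, -1.7, 0.3])
labels = np.array([2.0, -1.0, 0.0, -2.5, 1.0])
print(f"binary accuracy: {binary_accuracy(preds, labels):.2f}")  # 1.00
```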

Methods


No methods listed for this paper.