Deep Multimodal Learning for Emotion Recognition in Spoken Language

22 Feb 2018 · Yue Gu, Shuhong Chen, Ivan Marsic

In this paper, we present a novel deep multimodal framework to predict human emotions based on sentence-level spoken language. Our architecture has two distinctive characteristics...
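The abstract only sketches the architecture, but the general pattern it describes (separate encoders for the text and audio modalities whose utterance-level features are fused by a small classifier) can be illustrated in a few lines of PyTorch. Everything below is an illustrative assumption rather than the paper's actual model: the `MultimodalEmotionNet` name, the layer choices, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    """Minimal two-branch multimodal classifier sketch.

    NOTE: layer types and sizes are illustrative placeholders,
    not the architecture from the paper.
    """
    def __init__(self, vocab_size=10000, embed_dim=128,
                 audio_dim=40, hidden_dim=128, num_emotions=5):
        super().__init__()
        # Text branch: word embeddings followed by a 1-D convolution
        # over the token sequence.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_conv = nn.Conv1d(embed_dim, hidden_dim,
                                   kernel_size=3, padding=1)
        # Audio branch: a recurrent layer over frame-level acoustic
        # features (e.g., MFCC vectors).
        self.audio_rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        # Fusion: concatenate both utterance-level vectors and
        # classify with a small feed-forward network.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_emotions),
        )

    def forward(self, tokens, audio_frames):
        # tokens: (batch, seq_len) word indices
        # audio_frames: (batch, frames, audio_dim) acoustic features
        t = self.embed(tokens).transpose(1, 2)   # (batch, embed_dim, seq_len)
        t = torch.relu(self.text_conv(t)).max(dim=2).values  # global max-pool
        _, h = self.audio_rnn(audio_frames)      # final hidden state
        fused = torch.cat([t, h.squeeze(0)], dim=1)
        return self.fusion(fused)                # emotion logits
```

As a quick sanity check, `MultimodalEmotionNet()(torch.randint(0, 10000, (2, 20)), torch.randn(2, 100, 40))` yields a `(2, 5)` tensor of emotion logits. Training both branches and the fusion layers end to end, as the abstract suggests, lets the gradient from the shared classifier fine-tune the modality-specific feature extractors jointly.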


Code


No code implementations yet.
