M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues

9 Nov 2019 · Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha

We present M3ER, a learning-based method for emotion recognition from multiple input modalities. Our approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and also is more robust than other methods to sensor noise in any of the individual modalities...
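The truncated abstract does not spell out how the modalities are fused, but the "multiplicative" in M3ER points to a fusion that lets agreeing modalities reinforce each other while a noisy modality contributes little. The sketch below is only an illustrative multiplicative late-fusion baseline in that spirit, not the paper's actual fusion scheme; the probability arrays and the `multiplicative_fuse` helper are hypothetical.

```python
# Illustrative sketch only: a simple multiplicative late fusion of per-modality
# class probabilities. It shows how multiplying (rather than averaging) modality
# scores lets a near-uniform, noisy modality barely affect the fused prediction.
import numpy as np

def multiplicative_fuse(prob_list, eps=1e-8):
    """Fuse per-modality class-probability vectors by elementwise product."""
    fused = np.ones_like(prob_list[0])
    for p in prob_list:
        fused *= np.clip(p, eps, 1.0)   # clip to avoid zeroing out on hard 0s
    return fused / fused.sum()          # renormalize to a probability distribution

# Hypothetical softmax outputs over 4 emotion classes from three modality networks.
face_probs   = np.array([0.70, 0.10, 0.10, 0.10])
text_probs   = np.array([0.60, 0.20, 0.10, 0.10])
speech_probs = np.array([0.25, 0.25, 0.25, 0.25])  # noisy / uninformative sensor

print(multiplicative_fuse([face_probs, text_probs, speech_probs]))
# The uniform speech distribution leaves the fused prediction essentially
# unchanged, while the agreeing face and text cues dominate.
```

In this toy setup the noisy speech channel is effectively ignored because a flat distribution scales every class equally, which is one intuition for why multiplicative combination can be more robust to a corrupted modality than additive averaging.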

