MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

ACL 2019 · Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, Rada Mihalcea

Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, there has been no large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue...
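As a minimal sketch of how the dataset's per-utterance annotations can be inspected, the snippet below loads one split of the released annotation CSV and tallies dialogues, speakers, and emotion/sentiment labels. The file name (train_sent_emo.csv) and column names (Dialogue_ID, Speaker, Emotion, Sentiment) are assumptions based on the public MELD release and may need adjusting to your copy.

import pandas as pd

# Assumed file and column names from the public MELD release; adjust as needed.
df = pd.read_csv("train_sent_emo.csv")

# Multi-party property: number of distinct dialogues and speakers.
print("dialogues:", df["Dialogue_ID"].nunique())
print("speakers:", df["Speaker"].nunique())

# Per-utterance emotion and sentiment label distributions.
print(df["Emotion"].value_counts())
print(df["Sentiment"].value_counts())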

