MELD (Multimodal EmotionLines Dataset)

Introduced by Poria et al. in MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

The Multimodal EmotionLines Dataset (MELD) was created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances as EmotionLines, but additionally encompasses the audio and visual modalities along with text. MELD comprises more than 1,400 dialogues and 13,000 utterances from the TV series Friends, with multiple speakers participating in each dialogue. Every utterance is labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise, or Fear. MELD also provides a sentiment annotation (positive, negative, or neutral) for each utterance.

Source: https://affective-meld.github.io/
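The two-level annotation scheme described above (seven emotion classes plus a coarser three-way sentiment per utterance) can be sketched in code. The field names, example utterances, and labels below are illustrative assumptions for this sketch, not the dataset's actual CSV schema:

```python
from collections import Counter

# The seven emotion labels and three sentiment labels used in MELD.
EMOTIONS = {"anger", "disgust", "sadness", "joy", "neutral", "surprise", "fear"}
SENTIMENTS = {"positive", "negative", "neutral"}

# Hypothetical utterance records in the spirit of MELD's annotations;
# the dict keys and the specific labels here are assumptions.
utterances = [
    {"speaker": "Joey",   "utterance": "How you doin'?",      "emotion": "joy",     "sentiment": "positive"},
    {"speaker": "Ross",   "utterance": "We were on a break!", "emotion": "anger",   "sentiment": "negative"},
    {"speaker": "Phoebe", "utterance": "Oh, okay.",           "emotion": "neutral", "sentiment": "neutral"},
]

def emotion_distribution(rows):
    """Count how often each of the seven emotion labels occurs,
    validating that every label belongs to the MELD label set."""
    counts = Counter(r["emotion"] for r in rows)
    assert set(counts) <= EMOTIONS, "unexpected emotion label"
    return counts

def sentiment_distribution(rows):
    """Count the three-way sentiment labels over the utterances."""
    counts = Counter(r["sentiment"] for r in rows)
    assert set(counts) <= SENTIMENTS, "unexpected sentiment label"
    return counts
```

A typical use is to check the class balance of a split before training, e.g. `emotion_distribution(utterances)` over all records in a file.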

License


  • Unknown

Modalities


  • Text
  • Audio
  • Video

Languages


  • English