Emotion Classification
94 papers with code • 10 benchmarks • 27 datasets
Emotion classification, or emotion categorization, is the task of recognising emotions and assigning them to the appropriate category. Given an input, the goal is to label it as 'neutral or no emotion' or as one or more of a set of given emotions that best describe the subject's mental state, as expressed through facial expressions, words, and so on. Example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated with metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).
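As an illustration, both metrics can be computed directly from paired annotations and predictions. The sketch below is a minimal NumPy implementation; the valence scores in the example are made up for demonstration:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error between continuous emotion annotations and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Hypothetical valence annotations vs. model predictions on five samples
truth = [0.1, 0.4, 0.8, 0.3, 0.6]
preds = [0.2, 0.5, 0.7, 0.2, 0.6]
print(f"CCC: {ccc(truth, preds):.3f}, MSE: {mse(truth, preds):.3f}")
```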
Libraries
Use these libraries to find Emotion Classification models and implementations.
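For instance, a pretrained text emotion classifier can be loaded through the Hugging Face transformers pipeline API. The checkpoint name below is a placeholder, not a specific recommendation; substitute any emotion classification model, e.g. one fine-tuned on GoEmotions:

```python
from transformers import pipeline

# Load a text-classification pipeline; the model name is a hypothetical placeholder.
classifier = pipeline(
    "text-classification",
    model="your-org/roberta-goemotions",  # replace with a real checkpoint
    top_k=None,  # return scores for all emotion labels, not just the top one
)

scores = classifier("I can't believe this actually worked!")
print(scores)  # e.g. [[{'label': 'surprise', 'score': ...}, ...]]
```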
Latest papers
KAM -- a Kernel Attention Module for Emotion Classification with EEG Data
In this work, a kernel attention module is presented for the task of EEG-based emotion classification with neural networks.
A Monotonicity Constrained Attention Module for Emotion Classification with Limited EEG Data
In this work, a parameter-efficient attention module is presented for emotion classification using a limited number of electroencephalogram (EEG) signals.
Dilated Context Integrated Network with Cross-Modal Consensus for Temporal Emotion Localization in Videos
In this paper, we introduce a new task, named Temporal Emotion Localization in videos (TEL), which aims to detect human emotions and localize their corresponding temporal boundaries in untrimmed videos with aligned subtitles.
ArmanEmo: A Persian Dataset for Text-based Emotion Detection
With the recent proliferation of open textual data on social media platforms, text-based Emotion Detection (ED) has received increasing attention in recent years.
BYEL: Bootstrap Your Emotion Latent
With the improved performance of deep learning, the number of studies trying to apply deep learning to human emotion analysis is increasing rapidly.
GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition
In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information.
Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning
In this paper, we propose a data-driven deep learning model, i.e., StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
None Class Ranking Loss for Document-Level Relation Extraction
Ignoring the context of entity pairs and the label correlations between the none class and pre-defined classes leads to sub-optimal predictions.
Exploiting Multiple EEG Data Domains with Adversarial Learning
Electroencephalography (EEG) is shown to be a valuable data source for evaluating subjects' mental states.
Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?
Generally, models that fuse complementary information from multiple modalities outperform their uni-modal counterparts.
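The distinction the paper's title asks about comes down to where the queries and keys/values originate. As a rough sketch (not the paper's actual architecture; the modality names and tensor shapes are assumptions), PyTorch's nn.MultiheadAttention expresses both fusion styles with a single call signature:

```python
import torch
import torch.nn as nn

d_model, n_heads = 256, 4
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

audio = torch.randn(8, 50, d_model)  # (batch, audio frames, features)
text = torch.randn(8, 20, d_model)   # (batch, text tokens, features)

# Self-attention: queries, keys, and values all come from one modality.
self_out, _ = attn(text, text, text)     # -> (8, 20, 256)

# Cross-attention: text queries attend to audio keys/values, so each
# token gathers the acoustic context most relevant to it.
cross_out, _ = attn(text, audio, audio)  # -> (8, 20, 256)

print(self_out.shape, cross_out.shape)
```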