Facial Emotion Recognition
26 papers with code • 2 benchmarks • 6 datasets
Facial emotion recognition is the task of recognizing human emotions from facial images.
Most implemented papers
Facial Emotion Recognition Using Transfer Learning in the Deep CNN
Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications.
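The paper's exact backbone and fine-tuning schedule are not reproduced here; the snippet below is only a minimal PyTorch sketch of the transfer-learning recipe, assuming an ImageNet-pretrained ResNet-18 as a stand-in backbone and seven basic emotion classes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its convolutional layers.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classifier head with one sized for 7 basic emotion classes
# (anger, disgust, fear, happiness, sadness, surprise, neutral).
num_emotions = 7
backbone.fc = nn.Linear(backbone.fc.in_features, num_emotions)

# Only the new head is optimized during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```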
Facial Emotion Recognition: State of the Art Performance on FER2013
Facial emotion recognition (FER) is significant for human-computer interaction in applications such as clinical practice and behavioral description.
Self-attention fusion for audiovisual emotion recognition with incomplete data
In this paper, we consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition.
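As a rough illustration of self-attention fusion over two modalities (not the paper's architecture), the sketch below treats precomputed audio and visual embeddings as a two-token sequence and lets multi-head self-attention weigh them; the embedding size, head count, emotion classes, and missing-audio mask are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    """Fuse audio and visual embeddings with self-attention (illustrative sketch)."""
    def __init__(self, dim=256, num_heads=4, num_emotions=7):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_emotions)

    def forward(self, audio_feat, visual_feat, audio_missing=None):
        # Stack the two modality embeddings as a 2-token sequence: (batch, 2, dim).
        tokens = torch.stack([audio_feat, visual_feat], dim=1)
        # Mask the audio token when it is absent, so attention relies on vision only.
        key_padding_mask = None
        if audio_missing is not None:
            key_padding_mask = torch.stack(
                [audio_missing, torch.zeros_like(audio_missing)], dim=1
            ).bool()
        fused, _ = self.attn(tokens, tokens, tokens, key_padding_mask=key_padding_mask)
        return self.classifier(fused.mean(dim=1))

model = SelfAttentionFusion()
logits = model(torch.randn(8, 256), torch.randn(8, 256))  # dummy audio/visual features
```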
Transfer: Deep Inductive Network for Facial Emotion Recognition
Deep learning using transfer learning has shown promising results in computer vision in solving the problem of lack of labeled data.
Facial Emotion Recognition with Noisy Multi-task Annotations
In our formulation, we exploit a new method that enables emotion prediction and joint distribution learning within a unified adversarial learning game.
Convolutional Neural Network Hyperparameters optimization for Facial Emotion Recognition
This paper presents a method for optimizing the hyperparameters of a convolutional neural network to increase accuracy in facial emotion recognition.
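A bare-bones random-search loop like the one below conveys the general idea; the search space, trial count, and the `train_and_evaluate` stand-in are all hypothetical and not the paper's actual optimization procedure.

```python
import random

# Hypothetical search space for a small FER CNN.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "num_filters": [32, 64, 128],
    "dropout": [0.25, 0.4, 0.5],
    "batch_size": [32, 64],
}

def train_and_evaluate(config):
    # Stand-in: in practice, build the CNN from `config`, train it on the
    # training split, and return validation accuracy.
    return random.random()

random.seed(0)
best_acc, best_config = 0.0, None
for _ in range(20):  # sample and evaluate 20 random configurations
    config = {name: random.choice(values) for name, values in search_space.items()}
    acc = train_and_evaluate(config)
    if acc > best_acc:
        best_acc, best_config = acc, config

print(best_config, round(best_acc, 3))
```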
Facial Emotion Recognition: A multi-task approach using deep learning
Facial Emotion Recognition is an inherently difficult problem, due to vast differences in facial structures of individuals and ambiguity in the emotion displayed by a person.
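One common way to frame this as multi-task learning is a shared convolutional backbone with separate task heads; the sketch below assumes emotion classification plus a valence/arousal regression head as the auxiliary task, which may differ from the tasks used in the paper.

```python
import torch
import torch.nn as nn

class MultiTaskFER(nn.Module):
    """Shared backbone with two task heads (illustrative; the auxiliary task is an assumption)."""
    def __init__(self, num_emotions=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.emotion_head = nn.Linear(64, num_emotions)   # emotion classification
        self.valence_arousal_head = nn.Linear(64, 2)      # assumed auxiliary regression task

    def forward(self, x):
        features = self.backbone(x)
        return self.emotion_head(features), self.valence_arousal_head(features)

model = MultiTaskFER()
images = torch.randn(8, 1, 48, 48)  # FER2013-sized grayscale face crops (dummy data)
emotion_logits, va = model(images)
cls_loss = nn.CrossEntropyLoss()(emotion_logits, torch.randint(0, 7, (8,)))
reg_loss = nn.MSELoss()(va, torch.zeros(8, 2))
loss = cls_loss + 0.5 * reg_loss  # weighted sum of the two task losses
```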
Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
This study is the first to consolidate a more transparent feature-relevance calculation for successful EEG-based facial emotion recognition using a within-subject-trained CNN in typically developing and ASD individuals.
A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset
Regarding the facial emotion recognizer, we extracted the Action Units of the videos and compared the performance of static models against sequential models.
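To make the static-versus-sequential comparison concrete, the sketch below classifies a clip from time-averaged Action Unit activations (static) and from the frame-by-frame AU trajectory with a GRU (sequential); the 17-AU dimensionality, the 8 RAVDESS emotion classes, and the dummy tensors are assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

num_aus, num_emotions = 17, 8               # assumed AU count and RAVDESS emotion classes
au_sequence = torch.randn(8, 90, num_aus)   # batch of 90-frame Action Unit tracks (dummy data)

# Static model: classify the clip from time-averaged Action Unit activations.
static_model = nn.Sequential(nn.Linear(num_aus, 64), nn.ReLU(), nn.Linear(64, num_emotions))
static_logits = static_model(au_sequence.mean(dim=1))

# Sequential model: a GRU consumes the frame-by-frame AU trajectory.
class SequentialAUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(num_aus, 64, batch_first=True)
        self.head = nn.Linear(64, num_emotions)

    def forward(self, x):
        _, hidden = self.gru(x)   # hidden: (num_layers, batch, 64)
        return self.head(hidden[-1])

sequential_logits = SequentialAUModel()(au_sequence)
```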
A novel facial emotion recognition model using segmentation VGG-19 architecture
CNNs have shown great potential in FER tasks due to their feature extraction strategy, which differs from that of conventional FER models.
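The segmentation stage is omitted here; as a minimal sketch, the snippet below assumes pre-segmented face crops and simply repurposes a pretrained VGG-19 classifier head for seven emotion classes.

```python
import torch.nn as nn
from torchvision import models

# VGG-19 backbone with its final ImageNet layer swapped for a 7-class emotion head.
# The paper's segmentation pre-processing is assumed to happen upstream of this
# model and is not reproduced here.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
vgg.classifier[-1] = nn.Linear(vgg.classifier[-1].in_features, 7)
```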