Facial Expression Recognition (FER)
122 papers with code • 25 benchmarks • 29 datasets
Facial Expression Recognition (FER) is a computer vision task that identifies and categorizes the emotional expressions on a human face. The goal is to automate emotion recognition, often in real time, by analyzing facial features such as the eyebrows, eyes, and mouth and mapping them to a set of emotion categories such as anger, fear, surprise, sadness, and happiness.
(Image credit: DeXpression)
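As a minimal illustration of this pipeline, the sketch below classifies a single face crop into discrete emotion categories with a small PyTorch CNN. The architecture, the seven-class label set, and the 48×48 input size are illustrative assumptions, not any specific published FER model.

```python
# Minimal FER inference sketch (PyTorch). The network, label set, and input
# size are illustrative assumptions, not a specific published model.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

class TinyFERNet(nn.Module):
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x):               # x: (B, 1, 48, 48) grayscale face crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinyFERNet().eval()
face = torch.randn(1, 1, 48, 48)        # stand-in for a detected, aligned face crop
with torch.no_grad():
    probs = model(face).softmax(dim=-1)
print(EMOTIONS[probs.argmax().item()])  # predicted emotion category
```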
Libraries
Use these libraries to find Facial Expression Recognition (FER) models and implementations.
Latest papers
eMotion-GAN: A Motion-based GAN for Photorealistic and Facial Expression Preserving Frontal View Synthesis
Considering the motion induced by head variation as noise and the motion induced by facial expression as the relevant information, our model is trained to filter out the noisy motion in order to retain only the motion related to facial expression.
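The core idea, filtering a motion field so that only expression-related motion survives, can be sketched as follows. The Farneback optical flow call is standard OpenCV; the flow-to-flow generator is an untrained stand-in, not the eMotion-GAN architecture.

```python
# Sketch of the flow-filtering idea: estimate dense motion between two frames,
# then map it through a generator assumed to suppress head-motion "noise" and
# keep expression-related motion. The generator is an untrained stand-in.
import cv2
import numpy as np
import torch
import torch.nn as nn

def dense_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    # Farneback optical flow: returns an (H, W, 2) displacement field
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

class FlowFilterGenerator(nn.Module):
    """Hypothetical flow-to-flow generator (noisy motion in, expression motion out)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, flow):            # flow: (B, 2, H, W)
        return self.net(flow)

prev_f = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
next_f = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
flow = dense_flow(prev_f, next_f)                       # (128, 128, 2)
flow_t = torch.from_numpy(flow).permute(2, 0, 1)[None]  # (1, 2, 128, 128)
filtered = FlowFilterGenerator()(flow_t)                # expression-only motion (untrained)
```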
A Lightweight Attention-based Deep Network via Multi-Scale Feature Fusion for Multi-View Facial Expression Recognition
On the other hand, the PWFS block employs a feature selection mechanism that discards less meaningful features prior to the fusion process.
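A select-then-fuse mechanism of this kind might look like the following sketch: score channels per view, hard-drop the lowest-scoring ones, then fuse by concatenation. The scoring network and keep ratio are illustrative choices, not the paper's PWFS block.

```python
# Hedged sketch of "select then fuse" for multi-view features: per-view
# channel scoring, top-k selection, then concatenation. Illustrative only.
import torch
import torch.nn as nn

class SelectThenFuse(nn.Module):
    def __init__(self, channels: int, keep_ratio: float = 0.75):
        super().__init__()
        self.score = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(channels, channels, 1),
                                   nn.Sigmoid())
        self.k = int(channels * keep_ratio)

    def select(self, x):                       # x: (B, C, H, W)
        s = self.score(x).squeeze(-1).squeeze(-1)        # (B, C) channel scores
        topk = s.topk(self.k, dim=1).indices
        mask = torch.zeros_like(s)
        mask.scatter_(1, topk, 1.0)            # keep top-k channels, drop the rest
        return x * mask[..., None, None]

    def forward(self, views):                  # list of per-view feature maps
        return torch.cat([self.select(v) for v in views], dim=1)

fuse = SelectThenFuse(channels=64)
views = [torch.randn(2, 64, 14, 14) for _ in range(3)]
fused = fuse(views)                            # (2, 192, 14, 14)
```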
Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues
In particular, using this AU codebook, the input image's expression label, and facial landmarks, a single action-unit heatmap is built to indicate the most discriminative regions of interest in the image with respect to the facial expression.
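The construction can be illustrated as below: given facial landmarks and the AUs tied to the image's expression label, place a Gaussian at each relevant landmark and sum. The AU-to-landmark mapping here is a made-up stand-in for the paper's codebook.

```python
# Illustrative action-unit heatmap: Gaussians at the landmarks associated
# with the expression's active AUs. The mapping is a hypothetical codebook.
import numpy as np

def au_heatmap(landmarks, au_landmark_ids, size=(112, 112), sigma=5.0):
    """landmarks: (N, 2) array of (x, y); au_landmark_ids: indices tied to active AUs."""
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(size, dtype=np.float32)
    for i in au_landmark_ids:
        x, y = landmarks[i]
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat / max(heat.max(), 1e-8)        # normalize to [0, 1]

# Hypothetical codebook entry: "happiness" activates AUs around the mouth corners,
# which are indices 48 and 54 in the standard 68-point landmark scheme.
landmarks = np.random.rand(68, 2) * 112        # stand-in for detected landmarks
heat = au_heatmap(landmarks, au_landmark_ids=[48, 54])
```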
Expression-aware video inpainting for HMD removal in XR applications
Our results demonstrate the remarkable capability of the proposed framework to remove HMDs from facial videos while maintaining the subject's facial expression and identity.
From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos
The TMAs capture and model the relationships among dynamic changes in facial expressions, effectively extending the pre-trained image model to videos.
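The general recipe, keeping a pre-trained image encoder frozen and adding a lightweight temporal module over per-frame features, can be sketched like this; the module below is generic temporal self-attention, not the paper's TMA design.

```python
# Sketch of temporal adaptation: a small attention layer over per-frame
# features from a frozen image backbone. Generic, not the paper's TMA.
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                 # x: (B, T, D) per-frame features
        h, _ = self.attn(x, x, x)         # attend across time
        return self.norm(x + h)           # residual keeps the image features intact

frames = torch.randn(2, 16, 512)          # features from a frozen image model
video_features = TemporalAdapter(dim=512)(frames).mean(dim=1)  # clip-level feature
```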
Subject-Based Domain Adaptation for Facial Expression Recognition
However, previous methods for MSDA adapt image classification models across datasets and do not scale well to a larger number of source domains.
QAFE-Net: Quality Assessment of Facial Expressions with Landmark Heatmaps
Beyond FER, pain estimation methods assess intensity levels in pain expressions; however, assessing the quality of all facial expressions is of critical value in health-related applications.
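The shift from classifying expressions to scoring their quality amounts to swapping a softmax classifier for a regression head. A minimal sketch, where the 512-dimensional feature source and the [0, 1] score range are assumptions rather than QAFE-Net's actual design:

```python
# Quality assessment as regression: a small head mapping pooled features to a
# continuous score. Feature dimension and score range are assumptions.
import torch
import torch.nn as nn

quality_head = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),      # quality score in [0, 1]
)
features = torch.randn(4, 512)             # e.g. pooled landmark-heatmap features
scores = quality_head(features).squeeze(-1)
```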
EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition
To test this, we evaluate the model trained on sample-level descriptions with zero-shot classification on four popular dynamic FER datasets.
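Zero-shot classification in the CLIP style works by embedding one natural-language description per emotion class, embedding the input, and picking the class with the highest similarity. The sketch below uses stock CLIP from Hugging Face on a single frame as a stand-in; EmoCLIP itself is trained on video with sample-level descriptions.

```python
# CLIP-style zero-shot emotion classification on one frame. Stock CLIP as a
# stand-in; the class descriptions are illustrative, not EmoCLIP's prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

descriptions = [
    "a face showing happiness, with a broad smile",
    "a face showing anger, with furrowed brows",
    "a face showing surprise, with raised eyebrows and an open mouth",
]
image = Image.new("RGB", (224, 224))       # stand-in for a video frame
inputs = processor(text=descriptions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # (1, num_classes) similarities
print(descriptions[logits.argmax().item()])
```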
EmoNeXt: an Adapted ConvNeXt for Facial Emotion Recognition
Facial expressions play a crucial role in human communication, serving as a powerful means to express a wide range of emotions.
A Dual-Direction Attention Mixed Feature Network for Facial Expression Recognition
In recent years, facial expression recognition (FER) has garnered significant attention in computer vision research.