Action Unit Detection
14 papers with code • 1 benchmark • 3 datasets
Action unit detection is the task of detecting action units in a video - for example, facial action units such as lip tightening or cheek raising in a video of a face.
(Image credit: AU R-CNN)
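AU detection is usually framed as multi-label, per-frame binary classification: several AUs can be active in the same frame, so each AU gets its own sigmoid output. Below is a minimal sketch of that framing in PyTorch; the AU list and the ResNet-18 backbone are illustrative placeholders, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical AU subset; real benchmarks (e.g. BP4D, DISFA) define their own.
AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU15", "AU17", "AU23"]

class AUDetector(nn.Module):
    """Multi-label AU detector: one sigmoid output per AU, per frame."""
    def __init__(self, num_aus=len(AU_NAMES)):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any frame-level feature extractor works
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(512, num_aus)        # one logit per AU

    def forward(self, frames):                     # frames: (B, 3, H, W)
        return self.head(self.backbone(frames))    # logits: (B, num_aus)

model = AUDetector()
logits = model(torch.randn(2, 3, 224, 224))
probs = torch.sigmoid(logits)                      # independent per-AU probabilities
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))  # multi-label targets
```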
Latest papers
Learning Contrastive Feature Representations for Facial Action Unit Detection
To address the challenge posed by noisy AU labels, we augment the supervised signal through the introduction of a self-supervised signal.
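One way to read this is as adding a standard self-supervised contrastive term alongside the (noisy) supervised AU loss. Below is a rough sketch of that combination using an InfoNCE-style loss over two augmented views; the weight `alpha` and the exact formulation are assumptions, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE between two L2-normalised views of the same batch (positives on the diagonal)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def combined_loss(au_logits, noisy_labels, z1, z2, alpha=0.5):
    """Supervised BCE on (noisy) AU labels plus a self-supervised contrastive term."""
    sup = F.binary_cross_entropy_with_logits(au_logits, noisy_labels)
    ssl = info_nce(z1, z2)
    return sup + alpha * ssl                            # alpha is a made-up weighting
```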
FG-Net: Facial Action Unit Detection with Generalizable Pyramidal Features
The proposed FG-Net achieves a strong generalization ability for heatmap-based AU detection thanks to the generalizable and semantic-rich features extracted from the pre-trained generative model.
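Heatmap-based AU detection predicts a spatial map per AU rather than a single score, with activation concentrated on the facial region the AU affects. The sketch below only illustrates how Gaussian target heatmaps can be built from assumed AU-to-location mappings; FG-Net's actual pipeline additionally draws its features from a pre-trained generative model.

```python
import torch

def gaussian_heatmap(size, center, sigma=4.0):
    """Single-channel (size, size) heatmap peaking at `center` = (x, y) in pixel coords."""
    ys, xs = torch.meshgrid(torch.arange(size).float(),
                            torch.arange(size).float(), indexing="ij")
    cx, cy = center
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# Hypothetical centres: each AU is tied to a facial location (brow, cheek, lip corner, ...).
au_centers = {"AU4": (32, 20), "AU6": (18, 40), "AU12": (32, 48)}
targets = torch.stack([gaussian_heatmap(64, c) for c in au_centers.values()])  # (num_aus, 64, 64)
# A detector would regress these maps (e.g. with MSE) and read AU presence from the peak value.
```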
Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection
Anatomically, there are numerous correlations between AUs, which contain rich information and are vital for AU detection.
EmotiEffNet Facial Features in Uni-task Emotion Recognition in Video at ABAW-5 competition
In this article, the results of our team for the fifth Affective Behavior Analysis in-the-wild (ABAW) competition are presented.
Video-Based Frame-Level Facial Analysis of Affective Behavior on Mobile Devices Using EfficientNets
In this paper, we consider the problem of real-time video-based facial emotion analytics, namely, facial expression recognition, prediction of valence and arousal and detection of action unit points.
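These real-time models typically share one lightweight backbone across the three tasks (expression classification, valence/arousal regression, AU detection). The sketch below shows that multi-head layout on torchvision's EfficientNet-B0; the head sizes and backbone variant are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskFaceModel(nn.Module):
    def __init__(self, num_expressions=8, num_aus=12):
        super().__init__()
        backbone = models.efficientnet_b0(weights=None)
        feat_dim = backbone.classifier[1].in_features     # 1280 for EfficientNet-B0
        backbone.classifier = nn.Identity()               # expose pooled features
        self.backbone = backbone
        self.expr_head = nn.Linear(feat_dim, num_expressions)  # categorical expression logits
        self.va_head = nn.Linear(feat_dim, 2)                   # valence and arousal
        self.au_head = nn.Linear(feat_dim, num_aus)              # multi-label AU logits

    def forward(self, frames):
        f = self.backbone(frames)
        return self.expr_head(f), torch.tanh(self.va_head(f)), self.au_head(f)

expr, va, au = MultiTaskFaceModel()(torch.randn(1, 3, 224, 224))
```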
JÂA-Net: Joint Facial Action Unit Detection and Face Alignment via Adaptive Attention
Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively.
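Read loosely, this means each AU gets its own learned spatial attention map over shared features. The toy module below captures only that idea; JÂA-Net's actual attention is initialised from face-alignment landmarks and refined jointly with alignment, which is not modelled here.

```python
import torch
import torch.nn as nn

class PerAUAttention(nn.Module):
    """Predict one spatial attention map per AU and pool attended features per AU."""
    def __init__(self, channels=128, num_aus=12):
        super().__init__()
        self.attn = nn.Conv2d(channels, num_aus, kernel_size=1)   # one attention map per AU
        self.classifiers = nn.ModuleList([nn.Linear(channels, 1) for _ in range(num_aus)])

    def forward(self, feats):                        # feats: (B, C, H, W)
        maps = torch.sigmoid(self.attn(feats))       # (B, num_aus, H, W)
        logits = []
        for k in range(maps.size(1)):
            attended = feats * maps[:, k : k + 1]    # weight features by AU-k attention
            pooled = attended.mean(dim=(2, 3))       # (B, C)
            logits.append(self.classifiers[k](pooled))
        return torch.cat(logits, dim=1), maps        # (B, num_aus) logits, attention maps
```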
Multitask Emotion Recognition with Incomplete Labels
We use the soft labels and the ground truth to train the student model.
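In other words, the student is distilled from the teacher's soft labels while also fitting whatever ground truth exists. Below is a sketch of that combined loss for the AU branch, with a per-frame mask for missing annotations; the temperature, weighting, and masking scheme are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def student_au_loss(student_logits, teacher_logits, gt_labels, label_mask, alpha=0.5, T=2.0):
    """Blend distillation from teacher soft labels with BCE on partially missing ground truth.

    label_mask is 1 for frames that actually carry AU annotations, 0 where they are missing.
    """
    soft_teacher = torch.sigmoid(teacher_logits / T)
    distill = F.binary_cross_entropy_with_logits(student_logits / T, soft_teacher)

    supervised = F.binary_cross_entropy_with_logits(
        student_logits, gt_labels, reduction="none"
    ).mean(dim=1)                                           # per-frame loss
    supervised = (supervised * label_mask).sum() / label_mask.sum().clamp(min=1)

    return alpha * distill + (1 - alpha) * supervised       # alpha is a made-up weighting
```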
Self-Supervised Representation Learning From Videos for Facial Action Unit Detection
In this paper, we aim to learn discriminative representations for facial action unit (AU) detection from a large amount of videos without manual annotations.
Unconstrained Facial Action Unit Detection via Latent Feature Domain
Due to the combination of source AU-related information and target AU-free information, the latent feature domain with transferred source label can be learned by maximizing the target-domain AU detection performance.
AU R-CNN: Encoding Expert Prior Knowledge into R-CNN for Action Unit Detection
We integrate various dynamic models (including convolutional long short-term memory, two-stream network, conditional random field, and temporal action localization network) into AU R-CNN and then investigate and analyze the reason behind the performance of dynamic models.
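The expert prior knowledge in AU R-CNN is that each AU can only appear in a specific, landmark-defined facial region, so each region gets its own classification branch. The sketch below illustrates that per-AU region cropping with torchvision's RoI align; the boxes are made-up placeholders, and the paper's partition rules and R-CNN head are considerably richer.

```python
import torch
from torchvision.ops import roi_align

feats = torch.randn(1, 256, 56, 56)   # backbone feature map for one face image

# Hypothetical landmark-derived regions; each row is (batch_idx, x1, y1, x2, y2)
# in feature-map coordinates, one per AU group.
au_boxes = torch.tensor([
    [0.0, 10.0,  5.0, 40.0, 25.0],    # brow region, e.g. AU1 / AU2 / AU4
    [0.0,  8.0, 25.0, 30.0, 45.0],    # cheek region, e.g. AU6
    [0.0, 15.0, 35.0, 42.0, 52.0],    # mouth region, e.g. AU12 / AU15
])

# Pool a fixed-size feature crop per AU region; each crop feeds its own AU classifier.
crops = roi_align(feats, au_boxes, output_size=(7, 7))   # (3, 256, 7, 7)
```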