Facial Action Unit Detection
21 papers with code • 3 benchmarks • 4 datasets
Facial action unit detection is the task of detecting action units (AUs) from a video of a face, for example lip tightening (AU23) and cheek raising (AU6).
(Image credit: Self-supervised Representation Learning from Videos for Facial Action Unit Detection)
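AU detection is usually framed as multi-label binary classification: each frame gets an independent occurrence label per action unit. A minimal sketch of that framing, assuming a generic PyTorch backbone and an illustrative subset of AUs (the names and shapes below are assumptions, not tied to any specific paper on this page):

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative subset of FACS action units; AU6 = cheek raiser, AU23 = lip tightener.
AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU23"]

class AUDetector(nn.Module):
    """Multi-label AU classifier: one sigmoid output per action unit."""
    def __init__(self, num_aus=len(AU_NAMES)):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any image backbone would do here
        backbone.fc = nn.Linear(backbone.fc.in_features, num_aus)
        self.backbone = backbone

    def forward(self, frames):                     # frames: (B, 3, H, W) face crops
        return self.backbone(frames)               # raw logits, one per AU

model = AUDetector()
criterion = nn.BCEWithLogitsLoss()                 # standard multi-label objective

frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, len(AU_NAMES))).float()   # per-AU occurrence labels
loss = criterion(model(frames), labels)
loss.backward()
```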
Most implemented papers
Multitask Emotion Recognition with Incomplete Labels
We use the soft labels and the ground truth to train the student model.
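The excerpt describes a teacher-student setup in which the student is supervised jointly by the teacher's soft labels and the ground truth. A hedged sketch of such a blended loss (the weighting scheme and loss choices below are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def student_loss(student_logits, teacher_logits, targets, alpha=0.5):
    """Blend distillation on the teacher's soft labels with the ground-truth loss.

    All tensors are (batch, num_labels); alpha trades off the two terms
    (0.5 is an illustrative default, not taken from the paper).
    """
    soft_targets = torch.sigmoid(teacher_logits)                       # teacher's soft labels
    distill = F.binary_cross_entropy_with_logits(student_logits, soft_targets)
    supervised = F.binary_cross_entropy_with_logits(student_logits, targets)
    return alpha * distill + (1.0 - alpha) * supervised

# Example usage with random tensors standing in for real model outputs.
s = torch.randn(4, 12, requires_grad=True)
t = torch.randn(4, 12)
y = torch.randint(0, 2, (4, 12)).float()
student_loss(s, t, y).backward()
```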
Deep Region and Multi-Label Learning for Facial Action Unit Detection
Region learning (RL) and multi-label learning (ML) have recently attracted increasing attention in the field of facial Action Unit (AU) detection.
Linear Disentangled Representation Learning for Facial Actions
The limited annotated data available for facial expression and action unit recognition hampers the training of deep networks that could otherwise learn disentangled, invariant features.
Pre-training strategies and datasets for facial representation learning
Recent work on Deep Learning in the area of face analysis has focused on supervised learning for specific tasks of interest (e.g., face recognition, facial landmark localization, etc.).
Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition
While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display.
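The idea of giving each AU pair its own learned, multi-dimensional edge representation can be sketched roughly as a relational layer over per-AU node embeddings. The layer below is a generic illustration written for this page, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class AURelationLayer(nn.Module):
    """Toy relation layer: every ordered AU pair (i, j) gets its own learned edge
    weight, which controls how much node j contributes when updating node i."""
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes):                           # nodes: (B, N, D), one embedding per AU
        B, N, D = nodes.shape
        src = nodes.unsqueeze(2).expand(B, N, N, D)     # node i repeated along j
        dst = nodes.unsqueeze(1).expand(B, N, N, D)     # node j repeated along i
        edges = self.edge_mlp(torch.cat([src, dst], dim=-1)).squeeze(-1)  # (B, N, N) pairwise scores
        weights = torch.softmax(edges, dim=-1)          # normalise over neighbours j
        messages = torch.bmm(weights, nodes)            # (B, N, D) weighted neighbour sum
        return torch.relu(self.update(torch.cat([nodes, messages], dim=-1)))

layer = AURelationLayer(dim=64)
out = layer(torch.randn(2, 12, 64))                     # 12 AUs, 64-d embeddings -> (2, 12, 64)
```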
Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection
Anatomically, there are innumerable correlations between AUs, which contain rich information and are vital for AU detection.
GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing
In conclusion, this paper provides valuable insights into the potential applications and challenges of MLLMs in human-centric computing.
Multi-View Dynamic Facial Action Unit Detection
We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint.
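The two-stage logic described above, first predicting the viewpoint and then running detectors trained for that viewpoint, can be sketched as follows. The module interfaces and the number of views are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MultiViewAUDetector(nn.Module):
    """Route each sample to the AU detector head trained for its predicted viewpoint."""
    def __init__(self, num_views, num_aus, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.view_head = nn.Linear(feat_dim, num_views)              # stage 1: which viewpoint?
        self.au_heads = nn.ModuleList(                               # stage 2: one detector per view
            [nn.Linear(feat_dim, num_aus) for _ in range(num_views)])

    def forward(self, frames):
        feats = self.encoder(frames)
        view = self.view_head(feats).argmax(dim=-1)                  # predicted viewpoint per sample
        logits = torch.stack([head(feats) for head in self.au_heads], dim=1)  # (B, V, num_aus)
        return logits[torch.arange(len(view)), view]                 # keep the matching view's output

model = MultiViewAUDetector(num_views=9, num_aus=10)                 # view count is illustrative
au_logits = model(torch.randn(4, 3, 112, 112))                       # (4, 10)
```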
Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment
Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection.
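The correlation the excerpt points to, landmarks locating where AUs occur, suggests a simple sketch in which predicted landmark coordinates are used to pool local features for AU classification. This is purely illustrative and not necessarily the paper's actual attention design:

```python
import torch
import torch.nn.functional as F

def pool_landmark_features(feature_map, landmarks):
    """Sample feature vectors at predicted landmark locations.

    feature_map: (B, C, H, W) convolutional features of the face image.
    landmarks:   (B, L, 2) landmark coordinates normalised to [-1, 1].
    Returns:     (B, L, C) one local feature per landmark, which can feed
                 per-AU classifiers for the AUs anchored at those landmarks.
    """
    grid = landmarks.unsqueeze(2)                                     # (B, L, 1, 2) sampling grid
    sampled = F.grid_sample(feature_map, grid, align_corners=False)   # (B, C, L, 1)
    return sampled.squeeze(-1).transpose(1, 2)                        # (B, L, C)

feats = torch.randn(2, 64, 28, 28)
lms = torch.rand(2, 68, 2) * 2 - 1                                    # 68 landmarks in [-1, 1]
local = pool_landmark_features(feats, lms)                            # (2, 68, 64)
```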
Unconstrained Facial Action Unit Detection via Latent Feature Domain
Because it combines source AU-related information with target AU-free information, the latent feature domain with transferred source labels can be learned by maximizing target-domain AU detection performance.