Facial Action Unit Detection

21 papers with code • 3 benchmarks • 4 datasets

Facial action unit detection is the task of detecting facial action units (AUs) in images or videos of a face, for example lip tightening (AU23) and cheek raising (AU6).

(Image credit: Self-supervised Representation Learning from Videos for Facial Action Unit Detection)
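Because each AU can be present or absent independently of the others, AU detection is usually framed as multi-label binary classification rather than a single softmax choice. A minimal sketch of the final decision step, assuming a hypothetical model has already produced one logit per AU for a frame:

```python
import math

# Hypothetical AU set for illustration; real systems predict many more FACS AUs.
AU_NAMES = ["AU6 (cheek raiser)", "AU12 (lip corner puller)", "AU23 (lip tightener)"]

def detect_aus(logits, threshold=0.5):
    """Multi-label AU detection: apply an independent sigmoid per AU,
    then keep every AU whose probability clears the threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [name for name, p in zip(AU_NAMES, probs) if p >= threshold]

# Example logits (assumed, not from any real model):
print(detect_aus([0.3, 2.0, -1.5]))  # AU6 and AU12 fire; AU23 does not
```

The per-AU sigmoid (rather than a shared softmax) is what lets co-occurring AUs, such as the cheek raise and lip-corner pull of a smile, be detected together.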

Most implemented papers

Multitask Emotion Recognition with Incomplete Labels

wtomin/Multitask-Emotion-Recognition-with-Incomplete-Labels 10 Feb 2020

We use the soft labels and the ground truth to train the student model.

Deep Region and Multi-Label Learning for Facial Action Unit Detection

zkl20061823/DRML CVPR 2016

Region learning (RL) and multi-label learning (ML) have recently attracted increasing attention in the field of facial Action Unit (AU) detection.

Linear Disentangled Representation Learning for Facial Actions

eglxiang/icassp15_emotion 11 Jan 2017

The limited annotated data available for facial expression and action unit recognition hampers the training of deep networks that could learn disentangled, invariant features.

Pre-training strategies and datasets for facial representation learning

1adrianb/unsupervised-face-representation 30 Mar 2021

Recent work on deep learning in the area of face analysis has focused on supervised learning for specific tasks of interest (e.g., face recognition, facial landmark localization, etc.).

Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition

cvi-szu/me-graphau 2 May 2022

While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display.

Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection

yuankaishen2001/Self-adjusting-AU 15 Aug 2023

Anatomically, there are innumerable correlations between AUs, which contain rich information and are vital for AU detection.

GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing

envision-research/gpt4affectivity 9 Mar 2024

In conclusion, this paper provides valuable insights into the potential applications and challenges of MLLMs in human-centric computing.

Multi-View Dynamic Facial Action Unit Detection

BCV-Uniandes/AUNets 25 Apr 2017

We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint.

Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment

ZhiwenShao/JAANet ECCV 2018

Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection.

Unconstrained Facial Action Unit Detection via Latent Feature Domain

ZhiwenShao/ADLD 25 Mar 2019

Due to the combination of source AU-related information and target AU-free information, the latent feature domain with transferred source label can be learned by maximizing the target-domain AU detection performance.