Action Unit Detection

11 papers with code • 1 benchmark • 3 datasets

Action unit detection is the task of detecting which action units are active in a video, for example facial action units such as lip tightening or cheek raising in a video of a face.
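Since several action units can be active in the same frame, the task is commonly framed as multi-label binary classification with one sigmoid output per AU. A minimal PyTorch sketch of this framing (the backbone, feature size, and AU count are illustrative assumptions, not from any specific paper listed here):

```python
import torch
import torch.nn as nn

NUM_AUS = 12  # assumption: number of annotated AUs varies by dataset

class AUDetector(nn.Module):
    """Toy AU detector: one independent logit per action unit per frame."""
    def __init__(self, feature_dim=512, num_aus=NUM_AUS):
        super().__init__()
        # Stand-in for a CNN backbone over cropped face frames.
        self.backbone = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU())
        self.head = nn.Linear(256, num_aus)

    def forward(self, frame_features):
        return self.head(self.backbone(frame_features))

model = AUDetector()
frames = torch.randn(8, 512)              # batch of 8 frame features
logits = model(frames)                    # shape (8, NUM_AUS)
probs = torch.sigmoid(logits)             # independent per-AU probabilities
active = probs > 0.5                      # multi-label decision per frame
targets = torch.randint(0, 2, (8, NUM_AUS)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)
```

Because each AU gets its own sigmoid rather than a shared softmax, the model can mark any subset of AUs active simultaneously.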

(Image credit: AU R-CNN)



Most implemented papers

Multitask Emotion Recognition with Incomplete Labels

wtomin/Multitask-Emotion-Recognition-with-Incomplete-Labels 10 Feb 2020

We use the soft labels and the ground truth to train the student model.
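The excerpt above describes a distillation setup: the student is trained on both the teacher's soft labels and the ground truth. A hedged sketch of that combined objective for multi-label AU outputs (the function name, the fixed 0.5 weighting, and the tensor shapes are illustrative assumptions, not the paper's actual values):

```python
import torch
import torch.nn.functional as F

def student_loss(student_logits, teacher_probs, hard_labels, alpha=0.5):
    """Blend a distillation term (soft labels) with a supervised term."""
    # Distillation term: match the teacher's per-AU soft probabilities.
    soft = F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
    # Supervised term: fit the ground-truth hard labels.
    hard = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    return alpha * soft + (1 - alpha) * hard

# Toy example: zero logits give sigmoid outputs of 0.5 for every AU.
logits = torch.zeros(4, 12)
teacher = torch.full((4, 12), 0.5)   # maximally uncertain teacher
labels = torch.ones(4, 12)           # all AUs marked active
loss = student_loss(logits, teacher, labels)
```

The weighting between the two terms controls how much the student trusts the teacher where ground-truth labels are missing, which is the practical point of using soft labels under incomplete annotation.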

Deep Region and Multi-Label Learning for Facial Action Unit Detection

zkl20061823/DRML CVPR 2016

Region learning (RL) and multi-label learning (ML) have recently attracted increasing attention in the field of facial Action Unit (AU) detection.

A Compact Embedding for Facial Expression Similarity

AmirSh15/FECNet CVPR 2019

Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection.

AU R-CNN: Encoding Expert Prior Knowledge into R-CNN for Action Unit Detection

sharpstill/AU_R-CNN 14 Dec 2018

We integrate various dynamic models (including convolutional long short-term memory, two-stream network, conditional random field, and temporal action localization network) into AU R-CNN, then investigate and analyze the reasons behind the performance of the dynamic models.

Video-Based Frame-Level Facial Analysis of Affective Behavior on Mobile Devices Using EfficientNets

HSE-asavchenko/face-emotion-recognition CVPR Workshop 2022

In this paper, we consider the problem of real-time video-based facial emotion analytics, namely facial expression recognition, prediction of valence and arousal, and detection of action units.

Multi-View Dynamic Facial Action Unit Detection

BCV-Uniandes/AUNets 25 Apr 2017

We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint.

Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment

ZhiwenShao/JAANet ECCV 2018

Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection.

Unconstrained Facial Action Unit Detection via Latent Feature Domain

ZhiwenShao/ADLD 25 Mar 2019

Due to the combination of source AU-related information and target AU-free information, the latent feature domain with transferred source label can be learned by maximizing the target-domain AU detection performance.

Self-Supervised Representation Learning From Videos for Facial Action Unit Detection

mysee1989/TCAE CVPR 2019

In this paper, we aim to learn discriminative representation for facial action unit (AU) detection from large amount of videos without manual annotations.

JÂA-Net: Joint Facial Action Unit Detection and Face Alignment via Adaptive Attention

ZhiwenShao/PyTorch-JAANet 18 Mar 2020

Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively.