Facial action unit detection is the task of detecting action units, such as lip tightening and cheek raising, from a video of a face.
In this paper, we aim to learn discriminative representations for facial action unit (AU) detection from a large number of videos without manual annotations.
We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint.
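The two-stage pipeline described above, first predict the viewpoint, then run only the AU detectors trained for that viewpoint, can be sketched as follows. This is a toy illustration, not the paper's implementation: the viewpoint classifier, the AU detectors, and all feature names (`yaw`, `lip_corner`) are stand-in assumptions.

```python
def predict_viewpoint(frame):
    # Toy stand-in for a trained viewpoint classifier: threshold on yaw.
    return "frontal" if frame.get("yaw", 0.0) < 30.0 else "profile"

# One ensemble of AU detectors per viewpoint. Real detectors would be
# trained networks; here each is a simple threshold rule on a toy feature.
VIEW_ENSEMBLES = {
    "frontal": [lambda f: {"AU12": f["lip_corner"] > 0.5}],
    "profile": [lambda f: {"AU12": f["lip_corner"] > 0.7}],
}

def detect_aus(frame):
    # Stage 1: pick the viewpoint; Stage 2: evaluate that view's ensemble.
    view = predict_viewpoint(frame)
    scores = {}
    for detector in VIEW_ENSEMBLES[view]:
        scores.update(detector(frame))
    return view, scores
```

The key design point is that detectors never see poses they were not trained on: the profile ensemble can use a stricter threshold because lip-corner cues are harder to read from the side.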
Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection.
Moreover, to extract precise local features, we propose an adaptive attention learning module that refines the attention map of each AU.
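One way to picture such attention refinement: a coarse per-AU attention map (e.g. derived from landmark locations) is blended with a data-driven saliency signal and renormalized. The blending rule and the `alpha` parameter are assumptions for illustration, not the paper's method.

```python
def refine_attention(attention, feature_saliency, alpha=0.5):
    """Refine a coarse AU attention map toward feature saliency.

    A minimal sketch, assuming a simple convex blend: each attention
    weight is moved a fraction `alpha` toward the saliency value for
    the same spatial location, then the map is renormalized to sum to 1.
    """
    refined = [(1.0 - alpha) * a + alpha * s
               for a, s in zip(attention, feature_saliency)]
    total = sum(refined) or 1.0
    return [r / total for r in refined]
```

With `alpha=0` the landmark-based prior is kept as-is; larger `alpha` lets the learned features override imprecise landmark placement.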
Inspired by co-training methods, we propose in this work a semi-supervised approach for AU recognition that utilizes a large number of web face images without AU labels together with a relatively small face dataset with AU annotations.
The limited annotated data available for recognizing facial expressions and action units hampers the training of deep networks, which could otherwise learn disentangled, invariant features.