Action unit detection is the task of detecting action units in a video — for example, facial action units (lip tightening, cheek raising) in a video of a face.
(Image credit: AU R-CNN)
We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint.
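The two-stage pipeline described above (predict the viewpoint, then apply viewpoint-specific detectors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of views, the ensemble size, and all function names (`predict_view`, `make_detector`, `detect_aus`) are assumptions, and the classifier and detectors are stand-ins for trained models.

```python
# Hypothetical sketch of a multi-view AU detection pipeline:
# 1) predict the camera viewpoint of a frame,
# 2) run an ensemble of AU detectors trained for that viewpoint,
# 3) average the ensemble's per-AU probabilities.
import numpy as np

NUM_VIEWS = 9   # illustrative number of camera views (assumption)
NUM_AUS = 10    # illustrative number of action units scored

def predict_view(frame: np.ndarray) -> int:
    """Stand-in viewpoint classifier: returns a view index in [0, NUM_VIEWS)."""
    return int(frame.mean()) % NUM_VIEWS  # placeholder logic, not a real model

def make_detector(view: int, seed: int):
    """Stand-in AU detector for one viewpoint: returns per-AU probabilities."""
    rng = np.random.default_rng(seed * NUM_VIEWS + view)
    weights = rng.random(NUM_AUS)  # fixed pseudo-random "model weights"
    def detect(frame: np.ndarray) -> np.ndarray:
        # Sigmoid of a scalar frame statistic scaled by per-AU weights.
        return 1.0 / (1.0 + np.exp(-(frame.mean() / 255.0) * weights))
    return detect

# One small ensemble of detectors per viewpoint.
ensembles = {v: [make_detector(v, s) for s in range(3)] for v in range(NUM_VIEWS)}

def detect_aus(frame: np.ndarray) -> np.ndarray:
    """Route the frame to the viewpoint-specific ensemble and average."""
    view = predict_view(frame)
    scores = np.stack([d(frame) for d in ensembles[view]])
    return scores.mean(axis=0)  # averaged per-AU probabilities

frame = np.full((64, 64), 128, dtype=np.uint8)
probs = detect_aus(frame)
```

The key design point is the routing step: each ensemble only ever sees frames from the viewpoint it was trained on, which is what the abstract's "ensemble of action unit detectors ... trained for that specific viewpoint" describes.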
Ranked #1 on Facial Action Unit Detection on BP4D
In this paper, we aim to learn discriminative representations for facial action unit (AU) detection from large amounts of video without manual annotations.
Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection.
(2) We integrate various dynamic models (including convolutional long short-term memory, two-stream networks, conditional random fields, and a temporal action localization network) into AU R-CNN, and then investigate and analyze the reasons behind the performance of these dynamic models.
Ranked #1 on Action Unit Detection on BP4D
Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively.
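One way to picture the adaptive refinement of a per-AU attention map is blending a landmark-based spatial prior with a feature-driven response map. The sketch below is an illustrative assumption, not the paper's module: the Gaussian prior, the blending weight `alpha`, and the function names (`gaussian_prior`, `refine_attention`) are all hypothetical.

```python
# Minimal sketch: refine a per-AU attention map by blending a Gaussian
# prior (centered on an assumed AU location) with a normalized,
# feature-driven response map. All shapes and parameters are assumptions.
import numpy as np

def gaussian_prior(h, w, cy, cx, sigma=4.0):
    """Spatial prior: a normalized Gaussian around an assumed AU center."""
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def refine_attention(prior, feature_map, alpha=0.5):
    """Adaptively blend the prior with a feature-driven response map."""
    resp = np.maximum(feature_map, 0.0)   # keep positive responses only
    resp = resp / (resp.sum() + 1e-8)     # normalize to a distribution
    refined = (1 - alpha) * prior + alpha * resp
    return refined / refined.sum()        # renormalize the blend

h, w = 16, 16
prior = gaussian_prior(h, w, cy=5, cx=8)
features = np.random.default_rng(0).random((h, w))
att = refine_attention(prior, features)
```

The `alpha` parameter controls how much the map adapts to the current features versus the fixed landmark prior; making it learnable per AU is one plausible reading of "refine the attention map of each AU adaptively."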
By combining source AU-related information with target AU-free information, the latent feature domain with the transferred source AU labels can be learned by maximizing target-domain AU detection performance.