Self-supervised learning of a facial attribute embedding from video

21 Aug 2018 · Olivia Wiles, A. Sophia Koepke, Andrew Zisserman

We propose a self-supervised framework for learning facial attributes by simply watching videos of a human face speaking, laughing, and moving over time. To perform this task, we introduce a network, the Facial Attributes-Net (FAb-Net), that is trained to embed multiple frames from the same video face-track into a common low-dimensional space. With this approach, we make three contributions: first, we show that the network can leverage information from multiple source frames by predicting confidence/attention masks for each frame; second, we demonstrate that using a curriculum learning regime improves the learned embedding; finally, we demonstrate that the network learns a meaningful face embedding that encodes information about head pose, facial landmarks and facial expression, i.e. facial attributes, without having been supervised with any labelled data. The learned embedding is comparable or superior to state-of-the-art self-supervised methods on these tasks and approaches the performance of supervised methods.
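To make the training objective concrete, below is a minimal PyTorch sketch of the idea described above: embed several source frames and a target frame from the same face-track, regress the target frame from each source embedding, and fuse the per-source predictions using predicted confidence/attention masks. All architecture details (64x64 frames, a 256-dimensional embedding, the layer sizes, and the `fabnet_step` helper) are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of FAb-Net-style self-supervised training.
# Sizes and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Convolutional encoder mapping a 64x64 RGB frame to a low-dim embedding."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),     # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(256, 256, 4, 2, 1), nn.ReLU(),  # 8 -> 4
        )
        self.fc = nn.Linear(256 * 4 * 4, dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Decoder(nn.Module):
    """Decodes (source, target) embeddings to an RGB frame plus a confidence map."""
    def __init__(self, dim=256):
        super().__init__()
        self.fc = nn.Linear(2 * dim, 256 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 256, 4, 2, 1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 4, 4, 2, 1),                # 32 -> 64; 3 RGB + 1 conf
        )

    def forward(self, z_src, z_tgt):
        h = self.fc(torch.cat([z_src, z_tgt], dim=1)).view(-1, 256, 4, 4)
        out = self.net(h)
        return torch.sigmoid(out[:, :3]), out[:, 3:]  # predicted frame, confidence logits

def fabnet_step(encoder, decoder, sources, target):
    """One training step (hypothetical helper).

    sources: (B, N, 3, 64, 64) frames from the same face-track
    target:  (B, 3, 64, 64) frame to reconstruct
    """
    B, N = sources.shape[:2]
    z_tgt = encoder(target)
    preds, confs = [], []
    for i in range(N):
        rgb, conf = decoder(encoder(sources[:, i]), z_tgt)
        preds.append(rgb)
        confs.append(conf)
    preds = torch.stack(preds, dim=1)                      # (B, N, 3, H, W)
    weights = F.softmax(torch.stack(confs, dim=1), dim=1)  # attention over source frames
    fused = (weights * preds).sum(dim=1)                   # confidence-weighted fusion
    return F.l1_loss(fused, target)                        # photometric reconstruction loss

# Smoke test with random tensors standing in for face-track frames.
enc, dec = Encoder(), Decoder()
loss = fabnet_step(enc, dec, torch.rand(2, 3, 3, 64, 64), torch.rand(2, 3, 64, 64))
loss.backward()
print(loss.item())
```

Because the target frame can only be reconstructed from other frames of the same face, the embedding is pushed to capture pose, landmarks, and expression; the paper's curriculum regime (not shown here) would correspond to gradually increasing the difficulty of the source/target frame pairs during training.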

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Facial Landmark Detection | 300W | FAb-Net | NME | 5.71 | #2 |
| Unsupervised Facial Landmark Detection | MAFL | FAb-Net | NME | 3.44 | #5 |
