Frame attention networks for facial expression recognition in videos

29 Jun 2019  ·  Debin Meng, Xiaojiang Peng, Kai Wang, Yu Qiao

Video-based facial expression recognition aims to classify a given video into one of several basic emotions. How to integrate the facial features of individual frames is crucial for this task. In this paper, we propose Frame Attention Networks (FAN), which automatically highlight discriminative frames in an end-to-end framework. The network takes a video with a variable number of face images as input and produces a fixed-dimension representation. The network is composed of two modules: a feature embedding module, a deep Convolutional Neural Network (CNN) that embeds face images into feature vectors, and a frame attention module, which learns attention weights used to adaptively aggregate the feature vectors into a single discriminative video representation. We conduct extensive experiments on the CK+ and AFEW 8.0 datasets. FAN outperforms other CNN-based methods and achieves state-of-the-art performance on CK+.
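The two-module design can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' released code: the ResNet-18 backbone is assumed from the AFEW entry in the results table, and the sigmoid scoring plus weight normalization in the attention module are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FrameAttentionNetwork(nn.Module):
    """Sketch of FAN: a CNN embeds each frame, then learned attention
    weights aggregate the frame features into one video vector."""

    def __init__(self, feature_dim=512, num_classes=7):
        super().__init__()
        # Feature embedding module: a ResNet-18 backbone with its
        # classification head removed (backbone choice is an assumption).
        backbone = models.resnet18(weights=None)
        self.embed = nn.Sequential(*list(backbone.children())[:-1])
        # Frame attention module: scores each frame feature
        # (sigmoid scoring is an illustrative assumption).
        self.attention = nn.Linear(feature_dim, 1)
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, frames):
        # frames: (T, 3, H, W), a clip with a variable number T of faces.
        feats = self.embed(frames).flatten(1)          # (T, feature_dim)
        scores = torch.sigmoid(self.attention(feats))  # (T, 1)
        weights = scores / scores.sum(dim=0, keepdim=True)
        video_feat = (weights * feats).sum(dim=0)      # fixed-size vector
        return self.classifier(video_feat)

# Usage: a 16-frame clip of 224x224 face crops yields 7 emotion logits,
# regardless of the number of input frames.
model = FrameAttentionNetwork()
clip = torch.randn(16, 3, 224, 224)
logits = model(clip)  # shape: (7,)
```

Because the attention weights are normalized over frames, the aggregated representation has the same dimension for any clip length, which is what lets the network accept videos with a variable number of face images.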


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Facial Expression Recognition (FER) | Acted Facial Expressions In The Wild (AFEW) | resnet18 | Accuracy (on validation set) | 51.181% | #8 |
| Facial Expression Recognition (FER) | CK+ | FAN | Accuracy (7 emotion) | 99.7% | #3 |

Methods


No methods listed for this paper.