Robust Lightweight Facial Expression Recognition Network with Label Distribution Training

This paper presents an efficient and robust facial expression recognition (FER) network, named EfficientFace, which has far fewer parameters yet is more robust for FER in the wild. First, to improve the robustness of the lightweight network, a local-feature extractor and a channel-spatial modulator are designed, both built on depthwise convolution, so that the network attends to local and globally salient facial features. Then, considering that most emotions occur as combinations, mixtures, or compounds of the basic emotions, we introduce a simple but efficient label distribution learning (LDL) method as a novel training strategy. Experiments on datasets with realistic occlusion and pose variation demonstrate that EfficientFace remains robust under these conditions. Moreover, the proposed method achieves state-of-the-art results on the RAF-DB, CAER-S, and AffectNet-7 datasets, with accuracies of 88.36%, 85.87%, and 63.70%, respectively, and a comparable accuracy of 59.89% on AffectNet-8.
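
The abstract does not spell out the layer configuration of the channel-spatial modulator, but the minimal PyTorch sketch below shows one way a depthwise-convolution-based modulator of this kind could be built. The module name `ChannelSpatialModulator`, the 3x3 kernel, and the sigmoid gating are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn


class ChannelSpatialModulator(nn.Module):
    """Hypothetical channel-spatial modulator built on depthwise convolution.

    The paper only states that depthwise convolution is employed to capture
    global-salient features; the exact configuration below is assumed.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Depthwise convolution: one filter per channel (groups=channels),
        # so spatial context is modeled without mixing channels.
        self.depthwise = nn.Conv2d(
            channels, channels, kernel_size,
            padding=padding, groups=channels, bias=False
        )
        self.bn = nn.BatchNorm2d(channels)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Produce a per-location, per-channel modulation map and rescale x.
        attention = self.gate(self.bn(self.depthwise(x)))
        return x * attention


if __name__ == "__main__":
    features = torch.randn(2, 64, 28, 28)   # a batch of feature maps
    modulator = ChannelSpatialModulator(64)
    print(modulator(features).shape)         # torch.Size([2, 64, 28, 28])
```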

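The label distribution learning (LDL) strategy can likewise be illustrated with a small sketch: instead of one-hot targets, each sample is supervised with a probability distribution over the emotion classes and trained with a KL-divergence loss. The label-smoothing scheme used to build the target distributions below is an assumption for illustration only, not the paper's construction.

```python
import torch
import torch.nn.functional as F


def label_distribution_loss(logits: torch.Tensor,
                            target_dist: torch.Tensor) -> torch.Tensor:
    """KL divergence between a target label distribution and the prediction.

    `target_dist` holds a probability vector over the emotion classes for
    each sample (e.g. a one-hot label softened toward related emotions).
    """
    log_probs = F.log_softmax(logits, dim=1)
    return F.kl_div(log_probs, target_dist, reduction="batchmean")


if __name__ == "__main__":
    num_classes = 7                               # basic emotion categories
    logits = torch.randn(4, num_classes)          # model outputs
    hard = torch.randint(0, num_classes, (4,))    # ground-truth labels
    # Soften each one-hot label into a distribution (illustrative smoothing).
    eps = 0.1
    target = torch.full((4, num_classes), eps / (num_classes - 1))
    target.scatter_(1, hard.unsqueeze(1), 1.0 - eps)
    print(label_distribution_loss(logits, target).item())
```
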
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Facial Expression Recognition (FER) | AffectNet | EfficientFace | Accuracy (7 emotion) | 63.70 | #23 |
| Facial Expression Recognition (FER) | AffectNet | EfficientFace | Accuracy (8 emotion) | 59.89 | #23 |
| Facial Expression Recognition (FER) | CAER | EfficientFace | Accuracy | 85.87 | #1 |
| Facial Expression Recognition (FER) | RAF-DB | EfficientFace | Overall Accuracy | 88.36 | #18 |
