Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition

7 Jul 2021  ·  Mouath Aouayeb, Wassim Hamidouche, Catherine Soladie, Kidiyo Kpalma, Renaud Seguier

As various databases of facial expressions have been made publicly available over the last few decades, the Facial Expression Recognition (FER) task has attracted considerable interest. The heterogeneous sources of these databases raise several challenges for the facial recognition task, which are usually addressed with Convolutional Neural Network (CNN) architectures. Unlike CNN models, a Transformer model based on the attention mechanism has recently been introduced to address vision tasks. One of the major issues with Transformers is their need for large amounts of training data, while most FER databases are limited compared to those of other vision applications. Therefore, in this paper we propose to learn a vision Transformer jointly with a Squeeze-and-Excitation (SE) block for the FER task. The proposed method is evaluated on several publicly available FER databases, including CK+, JAFFE, RAF-DB and SFEW. Experiments demonstrate that our model outperforms state-of-the-art methods on CK+ and SFEW and achieves competitive results on JAFFE and RAF-DB.
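The SE block mentioned in the abstract is the standard Squeeze-and-Excitation gate of Hu et al. (squeeze by global average pooling, excitation through a bottleneck MLP with a sigmoid, then channel-wise rescaling); the paper attaches such a block to the vision Transformer's features. Below is a minimal NumPy sketch of the generic block; the tensor layout, the weight names `w1`/`w2`, and the toy demo are illustrative assumptions, not the authors' code:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation gating over feature maps.

    x  : (batch, channels, height, width) features
    w1 : (channels, channels // r) squeeze weights (r = reduction ratio)
    w2 : (channels // r, channels) excitation weights
    """
    # Squeeze: global average pooling collapses each channel to one scalar
    z = x.mean(axis=(2, 3))                                # (batch, channels)
    # Excitation: bottleneck MLP, ReLU then sigmoid -> per-channel gates in (0, 1)
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0.0) @ w2)))
    # Scale: reweight every channel map by its gate
    return x * s[:, :, None, None]

# Toy demo with random weights (8 channels, reduction ratio r = 2)
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 4, 4))
w1 = 0.1 * rng.standard_normal((8, 4))
w2 = 0.1 * rng.standard_normal((4, 8))
y = se_block(x, w1, w2)
```

Because the sigmoid gates lie strictly between 0 and 1, the block can only attenuate channels, never amplify them; the network learns which channels to emphasize relative to the others.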

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Facial Expression Recognition (FER) | CK+ | ViT + SE | Accuracy (7 emotion) | 99.8 | #2 |
| Facial Expression Recognition (FER) | JAFFE | ViT | Accuracy | 94.83 | #3 |
| Facial Expression Recognition (FER) | RAF-DB | ViT + SE | Accuracy | 87.22 | #1 |
| Facial Expression Recognition (FER) | SFEW | ViT + SE | Accuracy | 54.29 | #2 |