AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild

14 Aug 2017  ·  Ali Mollahosseini, Behzad Hasani, Mohammad H. Mahoor ·

Automated affective computing in the wild is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (the categorical model); very few annotated facial databases exist for affective computing in the continuous dimensional model (e.g., valence and arousal). To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild, called AffectNet. AffectNet contains more than 1,000,000 facial images gathered from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research on automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and to predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines outperform conventional machine learning methods and off-the-shelf facial expression recognition systems.
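To make the two emotion models concrete, the sketch below shows a minimal PyTorch-style network with a categorical head trained with class-weighted cross-entropy (echoing the Weighted-Loss entry in the benchmark table, which counters the heavy class imbalance in in-the-wild data) and a regression head for valence and arousal. This is not the authors' actual baseline architecture; the trunk, the class names, and the weight values are illustrative assumptions.

import torch
import torch.nn as nn

class AffectBaseline(nn.Module):
    """Hypothetical two-head baseline. The paper trains separate networks
    for the categorical and dimensional models; a shared trunk is used
    here only to illustrate both tasks in one sketch."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Small convolutional trunk, a stand-in for the paper's CNN baselines.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(128, num_classes)  # categorical model
        self.va_head = nn.Linear(128, 2)             # valence, arousal in [-1, 1]

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), torch.tanh(self.va_head(h))

# Class-weighted cross-entropy: rarer expressions get larger weights
# (e.g., far more "happy" than "disgust" images in web-collected data).
# These weights are made up for illustration; typically 1 / class frequency.
class_weights = torch.tensor([1.0, 0.8, 3.5, 6.0, 4.2, 9.0, 2.5, 7.0])
cls_loss = nn.CrossEntropyLoss(weight=class_weights)
va_loss = nn.MSELoss()  # regression loss for valence/arousal

model = AffectBaseline()
images = torch.randn(4, 3, 224, 224)             # dummy image batch
labels = torch.randint(0, 8, (4,))               # expression categories
va_targets = torch.empty(4, 2).uniform_(-1, 1)   # valence/arousal targets

logits, va_pred = model(images)
loss = cls_loss(logits, labels) + va_loss(va_pred, va_targets)
loss.backward()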


Datasets

Introduced in the Paper: AffectNet

Used in the Paper: CK+, FER2013, DISFA, MMI
Task                                 Dataset    Model          Metric Name            Metric Value  Global Rank
Facial Expression Recognition (FER)  AffectNet  Weighted-Loss  Accuracy (7 emotion)   -             # 23
Facial Expression Recognition (FER)  AffectNet  Weighted-Loss  Accuracy (8 emotion)   58.0          # 23

Methods


No methods listed for this paper.