Facial expression recognition is the task of classifying the expressions in face images into categories such as anger, fear, surprise, sadness, and happiness.
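Framed this way, the task is standard multi-class classification: a model produces one score per expression category and the prediction is the highest-scoring label. A minimal pure-Python sketch (the label set follows common FER datasets, but the exact categories vary; the logits below are hypothetical):

```python
import math

# Basic-emotion categories commonly used by FER datasets (e.g. FER2013);
# the exact label set differs between benchmarks.
CATEGORIES = ["anger", "disgust", "fear", "happiness",
              "sadness", "surprise", "neutral"]

def softmax(logits):
    """Convert raw per-category scores into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the most probable expression label for one face image."""
    probs = softmax(logits)
    return CATEGORIES[probs.index(max(probs))]

# Hypothetical logits a trained network might output for one image:
print(predict([0.2, -1.0, 0.1, 3.5, 0.0, 1.2, 0.4]))  # highest score is index 3 -> "happiness"
```

In practice the logits come from a deep network trained on a labeled dataset; only the final argmax-over-categories step is shown here.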
(Image credit: DeXpression)
To enhance the robustness of the learned models for various scenarios, we propose to perform omni-supervised learning by exploiting the labeled samples together with a large number of unlabeled data.
Deep Neural Network (DNN) models have achieved high accuracy in classifying human emotional states from facial expression recognition datasets, but efficiency remains an important factor for resource-limited systems such as mobile devices and embedded systems.
Human emotion analysis has been the focus of many studies, especially in the field of Affective Computing, and is important for many applications, e.g., intelligent human-computer interaction, stress analysis, interactive games, and animation.
Recognizing the expressions of partially occluded faces is a challenging computer vision problem.
Unlike many other attributes, facial expression can change in a continuous way; therefore, a slight semantic change in the input should lead to only a small fluctuation in the output.
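The property described here is essentially a smoothness (Lipschitz-style) constraint on the model: the output change should be bounded in proportion to the input change. A toy sketch, assuming a hypothetical one-dimensional stand-in for the model (a real FER network would map image tensors to per-category scores):

```python
def toy_expression_score(x):
    """Stand-in for a model mapping an input feature to a continuous
    expression intensity; used only to illustrate the constraint."""
    return 0.5 * x + 0.1

def fluctuation(model, x, eps):
    """Output change caused by a small input perturbation eps."""
    return abs(model(x + eps) - model(x))

LIPSCHITZ_K = 0.5  # known slope of the toy model above

eps = 1e-3
delta = fluctuation(toy_expression_score, 2.0, eps)
# A small semantic change in the input yields a proportionally small
# change in the output: |f(x + eps) - f(x)| <= K * |eps|.
assert delta <= LIPSCHITZ_K * eps + 1e-12
```

For a deep network this bound is not known in closed form; methods that target this property typically regularize or constrain the network so that nearby inputs produce nearby outputs.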
This paper introduces a new form of real-time affective interface that engages the user in a process of conceptualisation of their emotional state.
We conducted comprehensive experiments on the categorical and dimensional models of affect on the challenging in-the-wild databases of AffectNet, FER2013, and Affect-in-Wild.
Current state-of-the-art models for automatic FER are based on very deep neural networks that are difficult to train.