
Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction

In this paper we address the problem of multi-cue affect recognition in challenging scenarios such as child-robot interaction. Towards this goal we propose a method for automatic affect recognition that leverages body expressions alongside facial ones, whereas traditional methods typically focus only on the latter. Our deep learning-based method uses hierarchical multi-label annotations and multi-stage losses, can be trained either jointly or separately, and yields computational models both for the individual modalities and for whole-body emotion. We evaluate our method on a challenging child-robot interaction database of emotional expressions that we collected, as well as on the GEMEP public database of acted emotions by adults, and show that the proposed method achieves significantly better results than facial-expression-only baselines.
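To make the two-cue fusion idea concrete, below is a minimal PyTorch sketch of a two-branch model with per-modality heads and a fused whole-body head trained under a summed (multi-stage) loss. The class and function names, feature dimensions, number of emotion classes, and loss weights are illustrative assumptions for this sketch, not the authors' actual architecture or implementation.

```python
import torch
import torch.nn as nn

class TwoStreamAffectNet(nn.Module):
    """Illustrative two-branch network: one branch per cue (face, body),
    each with its own emotion head, plus a fused whole-body head."""

    def __init__(self, face_dim=512, body_dim=512, hidden=256, num_classes=7):
        super().__init__()
        # Per-modality encoders (stand-ins for CNN / skeleton feature extractors).
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.body_enc = nn.Sequential(nn.Linear(body_dim, hidden), nn.ReLU())
        # Per-modality classifiers, giving intermediate (per-stage) supervision.
        self.face_head = nn.Linear(hidden, num_classes)
        self.body_head = nn.Linear(hidden, num_classes)
        # Fusion head producing the joint whole-body emotion prediction.
        self.fused_head = nn.Linear(2 * hidden, num_classes)

    def forward(self, face_feat, body_feat):
        f = self.face_enc(face_feat)
        b = self.body_enc(body_feat)
        return {
            "face": self.face_head(f),
            "body": self.body_head(b),
            "fused": self.fused_head(torch.cat([f, b], dim=-1)),
        }

def multi_stage_loss(outputs, face_y, body_y, fused_y, weights=(1.0, 1.0, 1.0)):
    # Sum of the two per-branch losses and the fused loss; weights are hypothetical.
    ce = nn.CrossEntropyLoss()
    return (weights[0] * ce(outputs["face"], face_y)
            + weights[1] * ce(outputs["body"], body_y)
            + weights[2] * ce(outputs["fused"], fused_y))

# Toy usage: random features and labels, just to show shapes and the joint loss.
model = TwoStreamAffectNet()
face_feat, body_feat = torch.randn(4, 512), torch.randn(4, 512)
face_y, body_y, fused_y = (torch.randint(0, 7, (4,)) for _ in range(3))
loss = multi_stage_loss(model(face_feat, body_feat), face_y, body_y, fused_y)
loss.backward()
```

Because each branch has its own head and loss term, the sketch can also be trained per modality by zeroing the other weights, mirroring the paper's claim that the model can be trained jointly or separately.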
