DAiSEE: Towards User Engagement Recognition in the Wild

7 Sep 2016  ·  Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian

We introduce DAiSEE, the first multi-label video classification dataset, comprising 9068 video snippets captured from 112 users, for recognizing the user affective states of boredom, confusion, engagement, and frustration in the wild. Each affective state is labeled at four levels (very low, low, high, and very high); the labels are crowd-annotated and correlated with a gold-standard annotation created by a team of expert psychologists. We also establish benchmark results on this dataset using current state-of-the-art video classification methods. We believe that DAiSEE will provide the research community with challenges in feature extraction, context-based inference, and the development of suitable machine learning methods for related tasks, thus serving as a springboard for further research. The dataset is available for download at https://people.iith.ac.in/vineethnb/resources/daisee/index.html.
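
To make the label scheme concrete, below is a minimal sketch of how one might load and inspect DAiSEE's per-clip labels. It assumes the labels ship as a CSV with a ClipID column and one integer column per affective state, with levels encoded 0 through 3 (very low to very high); the file path and column names are illustrative assumptions, not the dataset's documented layout.

```python
# Minimal sketch of the DAiSEE label scheme: each clip carries one
# ordinal level per affective state. Path and column names below are
# assumptions for illustration only.
import csv

STATES = ("Boredom", "Engagement", "Confusion", "Frustration")
LEVELS = ("very low", "low", "high", "very high")

def load_labels(path="Labels/AllLabels.csv"):
    """Map each clip ID to a dict of {affective state: level index 0-3}."""
    labels = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            labels[row["ClipID"]] = {s: int(row[s]) for s in STATES}
    return labels

if __name__ == "__main__":
    labels = load_labels()
    # Print one clip's labels in human-readable form.
    clip, states = next(iter(labels.items()))
    print(clip, {s: LEVELS[v] for s, v in states.items()})
```

Note that because every clip receives a level for all four states simultaneously, the task is multi-label: a model must predict four ordinal outputs per video rather than a single class.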


Datasets


Introduced in the Paper:

DAiSEE

Used in the Paper:

ImageNet, AffectNet, CK+, FER2013, DISFA, MMI
