Video Emotion Recognition
2 papers with code • 0 benchmarks • 2 datasets
These leaderboards are used to track progress in Video Emotion Recognition.
Emotion recognition in user-generated videos plays an important role in human-centered computing.
This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video and text modalities for emotion recognition.
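A multimodal fusion network of this kind is often built by concatenating per-modality embeddings and feeding them to a small classifier. The sketch below is a hypothetical illustration of that late-fusion idea, not the paper's exact architecture; all dimensions, the single hidden layer, and the 7-class output are assumptions.

```python
import numpy as np

# Hypothetical per-modality embeddings (dimensions are illustrative).
rng = np.random.default_rng(0)
audio = rng.standard_normal(32)
video = rng.standard_normal(64)
text = rng.standard_normal(16)

# Late fusion: concatenate the modality embeddings into one vector.
fused = np.concatenate([audio, video, text])   # shape (112,)

# Small classifier: one ReLU hidden layer, softmax over 7 emotion
# classes (a common choice; the actual class set is an assumption).
num_classes = 7
W1 = rng.standard_normal((112, 24)) * 0.1
W2 = rng.standard_normal((24, num_classes)) * 0.1
hidden = np.maximum(fused @ W1, 0.0)
logits = hidden @ W2
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In a trained system the weights would be learned end-to-end and the embeddings produced by modality-specific encoders; the fixed random values here only show the data flow.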
Although there is no consensus on a definition, human emotional states can usually be perceived through the auditory and visual systems.
Emotion recognition can provide crucial information about the user in many applications when building human-computer interaction (HCI) systems.
Performance of the proposed model shows that, on average, genuine facial expressions of emotion are easier to distinguish than unfelt ones, and that certain emotion pairs, such as contempt and disgust, are harder to distinguish than the rest.
Our experiments adapt several popular deep learning methods, as well as some traditional methods, to the problem of video emotion recognition.
Modelling Temporal Information Using Discrete Fourier Transform for Recognizing Emotions in User-generated Videos
In this way, static image features extracted from a pre-trained deep CNN and temporal information represented by DFT features are jointly considered for video emotion recognition.
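The combination of static CNN features and DFT-based temporal features can be sketched as follows. This is a minimal illustration under assumed dimensions: frame features stand in for pre-trained CNN outputs, the video-level static feature is average pooling, and the temporal feature keeps the magnitudes of the first few DFT coefficients along the time axis. The exact pooling, number of kept coefficients, and feature sizes are assumptions, not the paper's settings.

```python
import numpy as np

# Hypothetical frame-level features for one video: in practice one
# vector per frame from a pre-trained CNN; here random placeholders.
rng = np.random.default_rng(0)
num_frames, feat_dim = 64, 128
frame_feats = rng.standard_normal((num_frames, feat_dim))

# Static appearance feature: average-pool the frame features.
static_feat = frame_feats.mean(axis=0)          # shape (128,)

# Temporal feature: DFT along the time axis per feature dimension,
# keeping the magnitudes of the K lowest-frequency coefficients.
K = 8
dft = np.fft.rfft(frame_feats, axis=0)          # shape (33, 128), complex
temporal_feat = np.abs(dft[:K]).reshape(-1)     # shape (8 * 128,)

# Joint video-level descriptor: static and temporal parts combined,
# e.g. as input to a linear classifier.
video_feat = np.concatenate([static_feat, temporal_feat])
print(video_feat.shape)
```

Low-frequency DFT magnitudes summarize how each feature dimension evolves over the video while staying invariant to circular shifts in time, which is why they pair naturally with frame-level CNN features.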