Search Results for author: Ha Thi Phuong Thao

Found 3 papers, 3 papers with code

AttendAffectNet–Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention

1 code implementation · Sensors 2021 · Ha Thi Phuong Thao, B T Balamurali, Gemma Roig, Dorien Herremans

The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately.
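The fusion idea behind this result can be sketched in code: project each modality's features into a shared space and let self-attention mix them before prediction. The layer sizes, feature dimensions, and two-output head below are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalSelfAttentionFusion(nn.Module):
    """Hypothetical sketch of multimodal fusion with self-attention
    (dimensions and head are assumptions, not AttendAffectNet itself)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Project each modality's features into a shared embedding space.
        self.proj_visual = nn.Linear(512, dim)
        self.proj_audio = nn.Linear(128, dim)
        self.proj_text = nn.Linear(300, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)  # e.g. valence and arousal scores

    def forward(self, visual, audio, text):
        # Stack the three modality embeddings as a length-3 token sequence.
        tokens = torch.stack(
            [self.proj_visual(visual), self.proj_audio(audio), self.proj_text(text)],
            dim=1,
        )  # shape: (batch, 3, dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # attend across modalities
        return self.head(fused.mean(dim=1))  # pool tokens and predict

model = MultimodalSelfAttentionFusion()
out = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 300))
print(out.shape)  # torch.Size([4, 2])
```

Feeding all three modalities through one attention block is what lets each modality's representation be reweighted by the others, which is one plausible reading of why the joint models outperformed per-modality ones.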

Representation Learning

AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

1 code implementation · 21 Oct 2020 · Ha Thi Phuong Thao, Balamurali B. T., Dorien Herremans, Gemma Roig

In this work, we propose different variants of the self-attention based network for emotion prediction from movies, which we call AttendAffectNet.

Relation

Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

1 code implementation · 16 Sep 2019 · Ha Thi Phuong Thao, Dorien Herremans, Gemma Roig

Interestingly, we also observe that optical flow features are more informative than RGB features in videos, and overall, models using audio features are more accurate than those based on video features when predicting evoked emotions.

Optical Flow Estimation
