Search Results for author: Osama Zeeshan

Found 2 papers, 2 papers with code

Joint Multimodal Transformer for Emotion Recognition in the Wild

1 code implementation • 15 Mar 2024 • Paul Waligora, Haseeb Aslam, Osama Zeeshan, Soufiane Belharbi, Alessandro Lameiras Koerich, Marco Pedersoli, Simon Bacon, Eric Granger

Multimodal emotion recognition (MMER) systems typically outperform unimodal systems by leveraging the inter- and intra-modal relationships between, e.g., visual, textual, physiological, and auditory modalities (see the sketch after this entry).

Multimodal Emotion Recognition
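
To give a concrete sense of the joint-fusion idea described in the abstract, here is a minimal, hypothetical PyTorch sketch, not the paper's actual architecture: it assumes each modality has already been encoded into feature sequences of a common dimension, concatenates them into one token sequence, and runs a shared transformer encoder so self-attention can capture both inter- and intra-modal relationships. All class names, dimensions, and hyperparameters are illustrative.

```python
# Illustrative sketch only (not the paper's implementation): fusing
# per-modality feature sequences with a shared transformer encoder.
import torch
import torch.nn as nn

class SimpleMultimodalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4, num_layers=2, num_classes=7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, visual, audio):
        # visual: (batch, Tv, dim), audio: (batch, Ta, dim),
        # assumed pre-encoded to the same feature dimension
        tokens = torch.cat([visual, audio], dim=1)   # joint token sequence
        fused = self.encoder(tokens)                 # inter-/intra-modal attention
        return self.classifier(fused.mean(dim=1))    # pooled emotion logits

# Usage with random stand-in features for two modalities
logits = SimpleMultimodalFusion()(torch.randn(2, 10, 256),
                                  torch.randn(2, 20, 256))
```

Concatenating modality tokens into a single sequence is one simple way to let attention relate features within and across modalities; the paper's joint multimodal transformer should be consulted for the actual design.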

A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition

1 code implementation • 28 Mar 2022 • Gnana Praveen Rajasekar, Wheidima Carneiro de Melo, Nasib Ullah, Haseeb Aslam, Osama Zeeshan, Théo Denorme, Marco Pedersoli, Alessandro Koerich, Simon Bacon, Patrick Cardinal, Eric Granger

Specifically, we propose a joint cross-attention model that relies on the complementary relationships across audio-visual (A-V) modalities to extract salient features, allowing for accurate prediction of continuous valence and arousal values (see the sketch after this entry).

Multimodal Emotion Recognition
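
As a rough illustration of cross-attention-based A-V fusion, the hypothetical PyTorch sketch below has each modality attend to the other and regresses continuous valence and arousal from the fused features. It is an assumption-laden simplification, not the paper's joint cross-attention model; all names and dimensions are made up for the example.

```python
# Minimal cross-attention sketch for audio-visual fusion (illustrative only,
# not the paper's joint cross-attention model).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 2)  # outputs: valence, arousal

    def forward(self, audio, visual):
        # audio: (batch, Ta, dim), visual: (batch, Tv, dim)
        v_att, _ = self.a2v(visual, audio, audio)   # visual queries attend to audio
        a_att, _ = self.v2a(audio, visual, visual)  # audio queries attend to visual
        pooled = torch.cat([v_att.mean(dim=1), a_att.mean(dim=1)], dim=-1)
        return self.head(pooled)  # (batch, 2) continuous predictions

# Usage with random stand-in audio and visual feature sequences
va = CrossAttentionFusion()(torch.randn(2, 30, 128), torch.randn(2, 25, 128))
```

Letting each modality's queries attend over the other modality's keys and values is the standard way cross-attention exploits complementary information; the released code for the paper documents the authors' exact formulation.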
