Multimodal Sentiment Analysis

74 papers with code • 5 benchmarks • 7 datasets

Multimodal sentiment analysis is the task of performing sentiment analysis using multiple data sources, e.g., a camera feed of a speaker's face together with their recorded speech.

(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)
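For readers new to the task, here is a minimal sketch of a common late-fusion baseline: each modality is encoded separately, and the pooled features are concatenated before a small prediction head. Everything below is illustrative, not any specific paper's implementation; the feature sizes loosely follow the pre-extracted text/audio/vision features common in the CMU-MOSI literature.

```python
import torch
import torch.nn as nn

class LateFusionSentiment(nn.Module):
    """Minimal late-fusion baseline: encode each modality separately,
    concatenate the pooled features, and regress a sentiment score."""
    def __init__(self, text_dim=768, audio_dim=74, vision_dim=35, hidden=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.vision_proj = nn.Linear(vision_dim, hidden)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * hidden, 1),  # scalar sentiment, e.g. in [-3, 3]
        )

    def forward(self, text, audio, vision):
        fused = torch.cat(
            [self.text_proj(text), self.audio_proj(audio), self.vision_proj(vision)],
            dim=-1,
        )
        return self.head(fused)

# Toy usage with random utterance-level features
model = LateFusionSentiment()
score = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))
print(score.shape)  # torch.Size([4, 1])
```

Most of the papers listed below improve on exactly this fusion step, e.g., with cross-modal attention, distillation, or robustness to missing modalities.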


Latest papers with no code

Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities

no code yet • 25 Apr 2024

Specifically, we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics.
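No code is available for this paper, but the general idea of a sample-level contrastive distillation objective can be sketched in an InfoNCE style: a student trained on incomplete modalities must match the teacher's representation of the same sample while contrasting against the other samples in the batch, which is how cross-sample correlations enter the loss. This is a hypothetical, generic sketch, not the authors' exact mechanism.

```python
import torch
import torch.nn.functional as F

def sample_contrastive_distillation(student_feats, teacher_feats, tau=0.1):
    # student_feats: (B, D) features from incomplete-modality inputs
    # teacher_feats: (B, D) features from complete-modality inputs (teacher frozen)
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats.detach(), dim=-1)
    logits = s @ t.T / tau                              # (B, B) cross-sample similarities
    targets = torch.arange(s.size(0), device=s.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```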

Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space

no code yet • 13 Apr 2024

To address the aforementioned problems, we propose a trustworthy multimodal sentiment ordinal network (TMSON) to improve performance in sentiment analysis.

TCAN: Text-oriented Cross Attention Network for Multimodal Sentiment Analysis

no code yet • 6 Apr 2024

Motivated by these insights, we introduce a Text-oriented Cross-Attention Network (TCAN), emphasizing the predominant role of the text modality in MSA.
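Since no code has been released, the following is only a sketch of the text-oriented idea the abstract describes: textual tokens act as attention queries over an auxiliary modality (audio or vision), so the fused representation stays anchored on text. All names and dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class TextQueryCrossAttention(nn.Module):
    """Text-as-query cross-attention: text tokens query an auxiliary
    modality, keeping the text modality dominant in the fusion."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens, other_tokens):
        # query = text; key/value = the other modality
        fused, _ = self.attn(text_tokens, other_tokens, other_tokens)
        return self.norm(text_tokens + fused)  # residual preserves the text stream

# Toy usage: 50 text tokens attending over 200 audio frames
layer = TextQueryCrossAttention()
out = layer(torch.randn(2, 50, 128), torch.randn(2, 200, 128))
print(out.shape)  # torch.Size([2, 50, 128])
```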

Towards Multimodal Sentiment Analysis Debiasing via Bias Purification

no code yet • 8 Mar 2024

In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases.

Emoji Driven Crypto Assets Market Reactions

no code yet • 16 Feb 2024

In the burgeoning realm of cryptocurrency, social media platforms like Twitter have become pivotal in influencing market trends and investor sentiments.

Sentiment-enhanced Graph-based Sarcasm Explanation in Dialogue

no code yet • 6 Feb 2024

Although existing studies have achieved great success building on the generative pretrained language model BART, they fail to exploit the sentiments residing in the utterance, video, and audio, which are vital clues for sarcasm explanation.

Toward Robust Multimodal Learning using Multimodal Foundational Models

no code yet • 20 Jan 2024

Recently, CLIP-based multimodal foundation models have demonstrated impressive performance on numerous multimodal tasks by learning aligned cross-modal semantics from image-text pairs, yet they remain unable to directly address scenarios involving missing modalities.

WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge

no code yet • 12 Jan 2024

Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image).

Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach

no code yet • 28 Dec 2023

Multimodal sentiment analysis aims to identify the emotions expressed by individuals through visual, language, and acoustic cues.

Explainable Multimodal Sentiment Analysis on Bengali Memes

no code yet • 20 Dec 2023

Memes have become a distinctive and effective form of communication in the digital era, attracting online communities and cutting across cultural barriers.