Search Results for author: Junghun Kim

Found 3 papers, 0 papers with code

CMSBERT-CLR: Context-driven Modality Shifting BERT with Contrastive Learning for linguistic, visual, acoustic Representations

no code implementations · 21 Aug 2022 · Junghun Kim, Jihie Kim

In this paper, we present a Context-driven Modality Shifting BERT with Contrastive Learning for linguistic, visual, acoustic Representations (CMSBERT-CLR), which incorporates the whole context's non-verbal and verbal information and aligns modalities more effectively through contrastive learning.

Tasks: Contrastive Learning, Multimodal Sentiment Analysis
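The cross-modal alignment described above can be sketched with a standard InfoNCE-style contrastive loss, where paired utterance embeddings from two modalities are pulled together and unpaired ones pushed apart. This is a minimal illustration of the general technique, not the paper's exact objective; the function name, embedding shapes, and temperature value are assumptions.

```python
import numpy as np

def contrastive_alignment_loss(text_emb, audio_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of paired modality embeddings.

    text_emb, audio_emb: (batch, dim) arrays; row i of each is a positive
    pair, all other rows in the batch serve as negatives.
    (Illustrative sketch, not the exact CMSBERT-CLR objective.)
    """
    # L2-normalize rows so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = t @ a.T / temperature            # (batch, batch) similarity matrix

    def ce(l):
        # cross-entropy with the diagonal (matching pairs) as targets
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # symmetric: text-to-audio and audio-to-text matching directions
    return 0.5 * (ce(logits) + ce(logits.T))

rng = np.random.default_rng(0)
loss = contrastive_alignment_loss(rng.normal(size=(8, 16)),
                                  rng.normal(size=(8, 16)))
```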

Improving Speech Emotion Recognition Through Focus and Calibration Attention Mechanisms

no code implementations · 21 Aug 2022 · Junghun Kim, Yoojin An, Jihie Kim

To improve the attention area, we propose to use a Focus-Attention (FA) mechanism and a novel Calibration-Attention (CA) mechanism in combination with the multi-head self-attention.

Tasks: Speech Emotion Recognition
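The Focus-Attention and Calibration-Attention mechanisms are combined with multi-head self-attention; the baseline they modify can be sketched as standard scaled dot-product multi-head self-attention. The FA and CA modules themselves are specific to the paper and are not reproduced here; all shapes and names below are illustrative assumptions.

```python
import numpy as np

def multi_head_self_attention(x, w_q, w_k, w_v, num_heads):
    """Standard scaled dot-product multi-head self-attention.

    x: (seq_len, dim) input sequence; w_q, w_k, w_v: (dim, dim) projections.
    The paper's FA/CA mechanisms modify this baseline (not shown here).
    """
    seq_len, dim = x.shape
    head_dim = dim // num_heads

    def split(h):  # (seq, dim) -> (heads, seq, head_dim)
        return h.reshape(seq_len, num_heads, head_dim).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(head_dim)  # (heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)           # softmax stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    out = weights @ v                                      # (heads, seq, head_dim)
    # concatenate heads back to (seq_len, dim)
    return out.transpose(1, 0, 2).reshape(seq_len, dim)

rng = np.random.default_rng(0)
dim = 16
x = rng.normal(size=(10, dim))
projections = [rng.normal(size=(dim, dim)) for _ in range(3)]
out = multi_head_self_attention(x, *projections, num_heads=4)
```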

Representation Learning with Graph Neural Networks for Speech Emotion Recognition

no code implementations · 21 Aug 2022 · Junghun Kim, Jihie Kim

In particular, we propose a cosine similarity-based graph as an ideal graph structure for representation learning in SER.

Tasks: Representation Learning, Speech Emotion Recognition
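A cosine similarity-based graph of the kind the abstract mentions can be sketched by connecting nodes whose feature vectors exceed a similarity threshold. This is a minimal illustration under assumed inputs; the feature type, node granularity, and the 0.5 threshold are not taken from the paper.

```python
import numpy as np

def cosine_similarity_graph(features, threshold=0.5):
    """Build a binary adjacency matrix from pairwise cosine similarity.

    features: (num_nodes, dim) array, e.g. frame-level acoustic features.
    Nodes are connected when their cosine similarity exceeds `threshold`
    (0.5 is an illustrative choice, not a value from the paper).
    """
    # normalize rows so the Gram matrix holds cosine similarities
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T                      # (num_nodes, num_nodes)
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)               # drop self-loops
    return adj

rng = np.random.default_rng(0)
adj = cosine_similarity_graph(rng.normal(size=(6, 8)))
```

The resulting adjacency matrix is symmetric and can feed directly into a graph neural network layer as the message-passing structure.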
