Search Results for author: Jiaxin Ye

Found 4 papers, 4 papers with code

emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation

2 code implementations • 23 Dec 2023 • Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, Xie Chen

To the best of our knowledge, emotion2vec is the first universal representation model for various emotion-related tasks, filling a gap in the field.

Self-Supervised Learning • Sentiment Analysis • +1
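
A universal self-supervised representation like the one described above is typically evaluated by freezing the pre-trained encoder and training only a lightweight head per downstream task. The sketch below illustrates that protocol only; the `PretrainedEmotionEncoder` wrapper, its GRU backbone, and the 768-dimensional feature size are assumptions for illustration, not emotion2vec's actual API.

```python
import torch
import torch.nn as nn

# Hypothetical frozen self-supervised speech encoder (a stand-in for a model
# like emotion2vec); the architecture and feature size are assumptions.
class PretrainedEmotionEncoder(nn.Module):
    def __init__(self, feat_dim: int = 768):
        super().__init__()
        self.backbone = nn.GRU(input_size=80, hidden_size=feat_dim, batch_first=True)

    @torch.no_grad()
    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, frames, 80) -> utterance-level feature (batch, feat_dim)
        frame_feats, _ = self.backbone(mel)
        return frame_feats.mean(dim=1)

# Lightweight downstream head: the only part trained per task
# (speech emotion recognition, sentiment analysis, ...).
encoder = PretrainedEmotionEncoder().eval()
probe = nn.Linear(768, 4)  # e.g. 4 emotion classes

mel = torch.randn(8, 200, 80)   # dummy batch of log-mel features
logits = probe(encoder(mel))    # frozen features -> task logits
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss.backward()                 # gradients flow only into the probe
```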

Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech Emotion Recognition

1 code implementation • 14 Nov 2022 • Jiaxin Ye, Xin-Cheng Wen, Yujie Wei, Yong Xu, KunHong Liu, Hongming Shan

Specifically, TIM-Net first employs temporal-aware blocks to learn temporal affective representations, then integrates complementary information from the past and the future to enrich contextual representations, and finally fuses features at multiple time scales to better adapt to emotional variation.

Speech Emotion Recognition
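
The description above maps naturally onto a stack of dilated temporal convolutions applied in both time directions, with the per-scale outputs fused before classification. The sketch below is a rough reading of that idea under those assumptions, not the released TIM-Net code; all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class TemporalAwareBlock(nn.Module):
    """A dilated 1-D convolution over time; larger dilation = larger time scale."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              dilation=dilation, padding=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                  # x: (batch, channels, frames)
        return self.act(self.conv(x)) + x  # residual connection

class MultiScaleTemporalModel(nn.Module):
    def __init__(self, channels: int = 64, n_scales: int = 4, n_classes: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            TemporalAwareBlock(channels, dilation=2 ** i) for i in range(n_scales))
        self.head = nn.Linear(channels * n_scales, n_classes)

    def forward(self, x):                  # x: (batch, channels, frames)
        scale_feats = []
        for block in self.blocks:
            # "Past" view: the sequence as-is; "future" view: the sequence
            # reversed in time, so each scale sees complementary context.
            fwd = block(x)
            bwd = block(torch.flip(x, dims=[2]))
            scale_feats.append((fwd + torch.flip(bwd, dims=[2])).mean(dim=2))
        # Fuse features from all time scales before classification.
        return self.head(torch.cat(scale_feats, dim=1))

model = MultiScaleTemporalModel()
logits = model(torch.randn(8, 64, 300))    # dummy batch of 300-frame features
```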

Online Prototype Learning for Online Continual Learning

1 code implementation • ICCV 2023 • Yujie Wei, Jiaxin Ye, Zhizhong Huang, Junping Zhang, Hongming Shan

Online continual learning (CL) studies the problem of learning continuously from a single-pass data stream while adapting to new data and mitigating catastrophic forgetting.

Continual Learning • Knowledge Distillation
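
The single-pass setting described above can be illustrated with a training loop that sees each batch from the stream exactly once while maintaining a running class-mean prototype per class. This is a generic sketch of online continual learning with prototypes, not the paper's OnPro method; the encoder and loss are placeholders.

```python
import torch
import torch.nn as nn

# Generic single-pass online CL loop with running class-mean prototypes.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.01)
prototypes = {}   # class id -> (running mean feature, sample count)

def online_step(x, y):
    feats = encoder(x)
    # Nearest-prototype loss: pull features toward their class prototype.
    loss = 0.0
    for f, label in zip(feats, y.tolist()):
        mean, count = prototypes.get(label, (torch.zeros_like(f), 0))
        loss = loss + (f - mean.detach()).pow(2).sum()
        # Update the prototype online from the single-pass stream.
        prototypes[label] = ((mean * count + f.detach()) / (count + 1), count + 1)
    optimizer.zero_grad()
    (loss / len(y)).backward()
    optimizer.step()

# Each (x, y) batch from the stream is seen exactly once.
for _ in range(5):
    x = torch.randn(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))
    online_step(x, y)
```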

Emo-DNA: Emotion Decoupling and Alignment Learning for Cross-Corpus Speech Emotion Recognition

1 code implementation • 4 Aug 2023 • Jiaxin Ye, Yujie Wei, Xin-Cheng Wen, Chenglong Ma, Zhizhong Huang, KunHong Liu, Hongming Shan

On the one hand, our contrastive emotion decoupling performs decoupling learning via a contrastive decoupling loss, strengthening the separability of emotion-relevant features from corpus-specific ones.

Cross-corpus Speech Emotion Recognition • +1
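
One way to read the loss described above is a supervised contrastive term over an emotion subspace plus a term that pushes the emotion and corpus subspaces of the same utterance apart. The sketch below is a schematic of that reading, not the paper's exact formulation; the two projected feature tensors and the orthogonality-style penalty are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_decoupling_loss(emo_feats, corpus_feats, emo_labels, temperature=0.1):
    """Schematic decoupling loss (not the paper's exact formulation).

    Pulls together emotion features that share an emotion label, and pushes the
    emotion subspace away from the corpus subspace of the same utterance.
    """
    emo = F.normalize(emo_feats, dim=1)
    cor = F.normalize(corpus_feats, dim=1)

    # Supervised contrastive term over the emotion subspace.
    sim = emo @ emo.t() / temperature                          # (batch, batch)
    mask = (emo_labels.unsqueeze(0) == emo_labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = mask.sum(dim=1).clamp(min=1)
    pull = -(log_prob * mask).sum(dim=1) / pos_counts

    # Decoupling term: emotion and corpus features of the same sample
    # should have near-zero cosine similarity.
    push = (emo * cor).sum(dim=1).pow(2)

    return (pull + push).mean()

emo_feats = torch.randn(16, 128, requires_grad=True)
corpus_feats = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 4, (16,))
contrastive_decoupling_loss(emo_feats, corpus_feats, labels).backward()
```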
