1 code implementation • 9 Oct 2023 • Haoyu Zhang, Yu Wang, Guanghao Yin, Kejun Liu, Yuanyuan Liu, Tianshu Yu
Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (e.g., language, video, and audio), potential sentiment-irrelevant and conflicting information across modalities may prevent performance from improving further.
Ranked #1 on Multimodal Sentiment Analysis on CMU-MOSI (Acc-5 metric)
no code implementations • 4 May 2023 • Yuanyuan Liu, Haoyu Zhang, Yibing Zhan, Zijing Chen, Guanghao Yin, Lin Wei, Zhe Chen
To this end, we present a novel paradigm that attempts to extract noise-resistant features in its pipeline and introduces a noise-aware learning scheme to effectively improve the robustness of multimodal emotion understanding.
no code implementations • 1 Mar 2023 • Guanghao Yin, Zefan Qu, Xinyang Jiang, Shan Jiang, Zhenhua Han, Ningxin Zheng, Xiaohong Liu, Huan Yang, Yuqing Yang, Dongsheng Li, Lili Qiu
To facilitate the research on this problem, a new benchmark dataset named LDV-WebRTC is constructed based on a real-world online streaming system.
no code implementations • 1 Aug 2022 • Yuanyuan Liu, Wei Dai, Chuanxu Feng, Wenbin Wang, Guanghao Yin, Jiabei Zeng, Shiguang Shan
To the best of our knowledge, MAFW is the first in-the-wild multi-modal database annotated with compound emotion annotations and emotion-related captions.
Ranked #12 on Dynamic Facial Expression Recognition on MAFW
Dynamic Facial Expression Recognition, Facial Expression Recognition, +1
1 code implementation • 26 Feb 2022 • Guanghao Yin, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun, Changhu Wang
Comparing the distribution differences between HQ and LQ images helps our model assess image quality more accurately.
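As a rough illustration of the idea above, one simple way to quantify a distribution difference between high-quality reference features and the features of a degraded image is a closed-form distance on per-channel statistics. This is a hedged sketch with assumed names and Gaussian-statistics shortcuts, not the paper's actual model:

```python
import numpy as np

# Illustrative sketch (not the paper's method): score an image by how far
# its pooled feature distribution sits from a high-quality reference set,
# using the closed-form 2-Wasserstein distance between 1-D Gaussians.

def gaussian_w2(mu1, var1, mu2, var2):
    # W2 distance between N(mu1, var1) and N(mu2, var2).
    return np.sqrt((mu1 - mu2) ** 2 + (np.sqrt(var1) - np.sqrt(var2)) ** 2)

rng = np.random.default_rng(1)
hq_feats = rng.normal(0.0, 1.0, 1000)   # pooled features of HQ images (assumed)
lq_feats = rng.normal(0.5, 2.0, 1000)   # features of one degraded image (assumed)

d = gaussian_w2(hq_feats.mean(), hq_feats.var(),
                lq_feats.mean(), lq_feats.var())
```

A larger distance `d` would indicate a feature distribution further from the HQ reference, i.e. lower predicted quality.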
1 code implementation • 8 Apr 2021 • Guanghao Yin, Wei Wang, Zehuan Yuan, Wei Ji, Dongdong Yu, Shouqian Sun, Tat-Seng Chua, Changhu Wang
We extract a degradation prior at the task level with the proposed ConditionNet, which is then used to adapt the parameters of the basic SR network (BaseNet).
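The conditioning mechanism described above can be sketched as a small hypernetwork-style head: an embedding of the degradation is mapped to per-channel scale and shift terms that modulate the weights of a base layer. All names, shapes, and the 1x1-convolution simplification here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: a "ConditionNet" maps a degradation embedding to
# per-output-channel scale/shift, which adapts the weights of a basic SR
# layer ("BaseNet") before it is applied.

def condition_net(degradation_embed, n_channels, rng):
    # Tiny linear head producing per-channel scale and shift.
    W = rng.standard_normal((degradation_embed.size, 2 * n_channels)) * 0.01
    out = degradation_embed @ W
    scale = 1.0 + out[:n_channels]   # start near identity
    shift = out[n_channels:]
    return scale, shift

def adapted_conv(features, weights, scale, shift):
    # Modulate BaseNet weights channel-wise, then apply a 1x1 conv
    # (a matrix multiply over the channel dimension).
    adapted = weights * scale[:, None] + shift[:, None]
    return features @ adapted.T

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8))      # 4 spatial positions, 8 channels
base_w = rng.standard_normal((8, 8))    # 1x1 conv: 8 out x 8 in
embed = rng.standard_normal(16)         # task-level degradation prior (assumed)

scale, shift = condition_net(embed, 8, rng)
out = adapted_conv(feat, base_w, scale, shift)
```

The point of the design is that a single BaseNet can serve many degradation types, since its effective weights change with the predicted prior.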
1 code implementation • 22 Aug 2020 • Guanghao Yin, Shou-qian Sun, Dian Yu, Dejian Li, Kejun Zhang
In this paper, we attempt to fuse subject-individual EDA features with features of the externally evoking music.
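A minimal way to realize such a fusion is late fusion by normalized concatenation of the two feature vectors before a shared classifier. The feature choices below (EDA statistics, tempo, spectral centroid) are illustrative assumptions, not the paper's feature set:

```python
import numpy as np

# Hedged sketch of multimodal late fusion: z-normalise each modality's
# feature vector, then concatenate into one joint representation.

def fuse(eda_feat, music_feat):
    def znorm(x):
        return (x - x.mean()) / (x.std() + 1e-8)
    return np.concatenate([znorm(eda_feat), znorm(music_feat)])

eda = np.array([0.2, 0.5, 0.1])   # e.g. tonic/phasic EDA statistics (assumed)
music = np.array([1.0, 3.0])      # e.g. tempo, spectral centroid (assumed)
fused = fuse(eda, music)
```

Per-modality normalization keeps one modality's scale from dominating the concatenated vector.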
no code implementations • 1 Aug 2020 • Guanghao Yin, Shou-qian Sun, Chao Li, Xin Min
First, a downsampling degradation GAN (DD-GAN) is trained to model the degradation process and produce more varied LR images, which is shown to be effective for data augmentation.
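The augmentation idea can be sketched without a GAN at all: a stochastic degrader that applies a random amount of blur before subsampling already yields varied LR images from one HR image. This stand-in degrader is an assumption for illustration only and is much simpler than the learned DD-GAN:

```python
import numpy as np

# Illustrative stochastic degrader (not the DD-GAN itself): random box
# blur strength, 2x subsampling, and mild noise, producing varied LR
# images from a single HR image for augmentation.

def degrade(hr, rng):
    k = rng.integers(1, 4)              # random number of blur passes
    blurred = hr.astype(float).copy()
    for _ in range(k):
        # 4-neighbour box blur on the interior.
        blurred[1:-1, 1:-1] = (blurred[:-2, 1:-1] + blurred[2:, 1:-1] +
                               blurred[1:-1, :-2] + blurred[1:-1, 2:]) / 4
    lr = blurred[::2, ::2]              # 2x nearest subsampling
    return lr + rng.normal(0, 0.01, lr.shape)

rng = np.random.default_rng(2)
hr = rng.random((8, 8))                 # toy 8x8 "HR" image
samples = [degrade(hr, rng) for _ in range(3)]
```

A learned degrader replaces the hand-written blur/noise with a generator trained to match real LR statistics, so the augmented pairs better cover real-world degradations.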
no code implementations • 10 Aug 2019 • Guanghao Yin, Shou-qian Sun, HUI ZHANG, Dian Yu, Chao Li, Ke-jun Zhang, Ning Zou
To the best of the authors' knowledge, our method is the first attempt at large-scale subject-independent emotion classification, using 7962 EDA signals from 457 subjects.