no code implementations • 21 Mar 2025 • Ran Liu, Fengyu Zhang, Cong Yu, Longjiang Yang, Zhuofan Wen, Siyuan Zhang, Hailiang Yao, Shun Chen, Zheng Lian, Bin Liu
This article presents our results for the eighth Affective Behavior Analysis in-the-wild (ABAW) competition. Multimodal emotion recognition (ER) has important applications in affective computing and human-computer interaction.
no code implementations • 25 Nov 2024 • Zhuofan Wen, Shangtong Gui, Yang Feng
In this paper, we focus on how to improve the performance of the draft model and aim to accelerate inference via a high acceptance rate.
2 code implementations • 26 Apr 2024 • Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, Shun Chen, Hao Gu, Jinming Zhao, Ziyang Ma, Xie Chen, Jiangyan Yi, Rui Liu, Kele Xu, Bin Liu, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
However, this process may lead to inaccurate annotations, such as ignoring non-majority or non-candidate labels.
no code implementations • 22 Mar 2024 • Zhuofan Wen, Fengyu Zhang, Siyuan Zhang, Haiyang Sun, Mingyu Xu, Licai Sun, Zheng Lian, Bin Liu, JianHua Tao
Multimodal fusion is a key technique for most multimodal tasks.
1 code implementation • 7 Dec 2023 • Zheng Lian, Licai Sun, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Bin Liu, JianHua Tao
To bridge this gap, we present the quantitative evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual sentiment analysis, tweet sentiment analysis, micro-expression recognition, facial emotion recognition, dynamic facial emotion recognition, and multimodal emotion recognition.