Search Results for author: Zhuofan Wen

Found 5 papers, 2 papers with code

Feature-Based Dual Visual Feature Extraction Model for Compound Multimodal Emotion Recognition

no code implementations • 21 Mar 2025 • Ran Liu, Fengyu Zhang, Cong Yu, Longjiang Yang, Zhuofan Wen, Siyuan Zhang, Hailiang Yao, Shun Chen, Zheng Lian, Bin Liu

This article presents our results for the eighth Affective Behavior Analysis in-the-wild (ABAW) competition. Multimodal emotion recognition (ER) has important applications in affective computing and human-computer interaction.

Multimodal Emotion Recognition

Speculative Decoding with CTC-based Draft Model for LLM Inference Acceleration

no code implementations • 25 Nov 2024 • Zhuofan Wen, Shangtong Gui, Yang Feng

In this paper, we focus on improving the performance of the draft model, aiming to accelerate inference through a high acceptance rate.
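Speculative decoding follows a draft-then-verify loop, and the acceptance rate directly governs the speedup: the more drafted tokens the target model accepts per verification pass, the fewer expensive target-model forward passes are needed. The Python sketch below illustrates the generic accept/reject rule (accept a drafted token with probability min(1, p_target/p_draft)) on toy distributions; it is a minimal illustration of standard speculative sampling, not the paper's CTC-based draft model, and draft_probs/target_probs are placeholder stand-ins.

import random

random.seed(0)

def draft_probs(prefix):
    # Toy stand-in for a cheap draft model (ignores context).
    return {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}

def target_probs(prefix):
    # Toy stand-in for the expensive target LLM (ignores context).
    return {"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}

def sample(probs):
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

def speculative_step(prefix, k=4):
    # 1) Draft model proposes k tokens autoregressively.
    drafted, ctx = [], list(prefix)
    for _ in range(k):
        tok = sample(draft_probs(ctx))
        drafted.append(tok)
        ctx.append(tok)
    # 2) Target model verifies: accept each token with prob min(1, p/q).
    #    (The residual-resampling step used on rejection in full
    #    speculative sampling is omitted here for brevity.)
    accepted, ctx = [], list(prefix)
    for tok in drafted:
        p, q = target_probs(ctx)[tok], draft_probs(ctx)[tok]
        if random.random() < min(1.0, p / q):
            accepted.append(tok)
            ctx.append(tok)
        else:
            break  # first rejection ends the speculative run
    return accepted

runs = [len(speculative_step(["a"])) for _ in range(1000)]
print("mean accepted tokens per step:", sum(runs) / len(runs))

With these toy distributions the per-token acceptance probability is sum(min(p, q)) = 0.9, so most of the four drafted tokens survive a typical step; a stronger draft model raises this rate and hence the end-to-end speedup, which is the lever the paper targets.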

GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition

1 code implementation • 7 Dec 2023 • Zheng Lian, Licai Sun, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Bin Liu, JianHua Tao

To bridge this gap, we present the quantitative evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual sentiment analysis, tweet sentiment analysis, micro-expression recognition, facial emotion recognition, dynamic facial emotion recognition, and multimodal emotion recognition.

Facial Emotion Recognition • Micro Expression Recognition • +3
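The benchmark protocol is plain zero-shot prompting: each image is sent to GPT-4V with a task instruction and a fixed label set, and the returned answer is mapped back onto the labels and scored against ground truth. The sketch below shows such a loop under stated assumptions: query_gpt4v is a hypothetical wrapper around a vision-language model API (not the paper's released code), and the label set and file names are placeholders.

LABELS = ["happy", "sad", "angry", "neutral"]  # placeholder label set

def build_prompt(labels):
    return ("Classify the facial emotion in this image. "
            "Answer with exactly one of: " + ", ".join(labels) + ".")

def evaluate(samples, query_gpt4v):
    # samples: list of (image_path, gold_label) pairs
    prompt = build_prompt(LABELS)
    correct = 0
    for image_path, gold in samples:
        raw = query_gpt4v(image_path, prompt).strip().lower()
        # Map the model's free-form answer back onto the label set.
        pred = next((label for label in LABELS if label in raw), None)
        correct += int(pred == gold)
    return correct / len(samples) if samples else 0.0

if __name__ == "__main__":
    stub = lambda path, prompt: "neutral"  # stand-in for a real API call
    print(evaluate([("face1.jpg", "neutral"), ("face2.jpg", "sad")], stub))

Constraining the answer to a fixed label set keeps scoring automatic across many datasets; unconstrained generations would otherwise require fuzzier answer matching.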
