Search Results for author: Peng Zhai

Found 11 papers, 5 papers with code

Role Play: Learning Adaptive Role-Specific Strategies in Multi-Agent Interactions

no code implementations · 2 Nov 2024 · Weifan Long, Wen Wen, Peng Zhai, Lihua Zhang

It trains a common policy on observations augmented with role embeddings and employs a role predictor to estimate the joint role embeddings of the other agents, helping the learning agent adapt to its assigned role.

Diversity · Multi-agent Reinforcement Learning · +1
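
For intuition, a minimal PyTorch sketch of the setup this abstract describes: a role predictor estimates the other agents' role embeddings, and a common policy is conditioned on the observation, the agent's own role, and those predictions. All dimensions and module names (obs_dim, role_dim, n_other) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class RolePredictor(nn.Module):
        """Estimates the joint role embeddings of the other agents from the local observation."""
        def __init__(self, obs_dim: int, role_dim: int, n_other: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, role_dim * n_other))
            self.role_dim, self.n_other = role_dim, n_other

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs).view(-1, self.n_other, self.role_dim)

    class RoleConditionedPolicy(nn.Module):
        """Common policy fed with the observation, its own role embedding,
        and the predicted role embeddings of the other agents."""
        def __init__(self, obs_dim: int, role_dim: int, n_other: int, n_actions: int):
            super().__init__()
            in_dim = obs_dim + role_dim + role_dim * n_other
            self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_actions))

        def forward(self, obs, own_role, predicted_roles):
            x = torch.cat([obs, own_role, predicted_roles.flatten(1)], dim=-1)
            return torch.distributions.Categorical(logits=self.net(x))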

Large Vision-Language Models as Emotion Recognizers in Context Awareness

no code implementations · 16 Jul 2024 · Yuxuan Lei, Dingkang Yang, Zhaoyu Chen, Jiawei Chen, Peng Zhai, Lihua Zhang

Extensive experiments and analyses demonstrate that LVLMs achieve competitive performance in the CAER task across different paradigms.

Emotion Recognition · In-Context Learning

Asynchronous Multimodal Video Sequence Fusion via Learning Modality-Exclusive and -Agnostic Representations

no code implementations · 6 Jul 2024 · Dingkang Yang, Mingcheng Li, Linhao Qu, Kun Yang, Peng Zhai, Song Wang, Lihua Zhang

To tackle these issues, we propose a Multimodal fusion approach for learning modality-Exclusive and modality-Agnostic representations (MEA) to refine multimodal features and leverage the complementarity across distinct modalities.
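
A toy sketch of the modality-exclusive / modality-agnostic split described for MEA: each modality gets a private encoder, a shared projection is applied to every modality, and the two kinds of features are fused by concatenation. The layer shapes, the shared projection, and the fusion head are assumptions made for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ExclusiveAgnosticFusion(nn.Module):
        """Toy fusion of modality-exclusive and modality-agnostic representations."""
        def __init__(self, dims: dict, hidden: int = 64):
            super().__init__()
            self.exclusive = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
            self.agnostic = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
            self.shared = nn.Linear(hidden, hidden)   # weights shared across modalities
            self.head = nn.Linear(hidden * 2 * len(dims), 1)

        def forward(self, feats: dict) -> torch.Tensor:
            parts = []
            for m, x in feats.items():
                parts.append(torch.relu(self.exclusive[m](x)))              # modality-exclusive part
                parts.append(torch.relu(self.shared(self.agnostic[m](x))))  # modality-agnostic part
            return self.head(torch.cat(parts, dim=-1))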

PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications

1 code implementation · 29 May 2024 · Dingkang Yang, Jinjie Wei, Dongling Xiao, Shunli Wang, Tong Wu, Gang Li, Mingcheng Li, Shuaibing Wang, Jiawei Chen, Yue Jiang, Qingyao Xu, Ke Li, Peng Zhai, Lihua Zhang

In the parameter-efficient secondary SFT phase, a mixture of universal-specific experts strategy is introduced to resolve the competency conflict between medical generalist knowledge and pediatric expertise mastery.

Domain Adaptation
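
A rough sketch of what a mixture of universal-specific experts could look like in a parameter-efficient SFT setting: two low-rank adapters (one universal, one domain-specific) sit on top of a frozen base projection, and a soft router weights them per token. The ranks, the router, and the layer names are hypothetical and not taken from PediatricsGPT.

    import torch
    import torch.nn as nn

    class UniversalSpecificAdapter(nn.Module):
        """Toy mixture of a universal and a domain-specific low-rank expert over a frozen layer."""
        def __init__(self, d_model: int, rank: int = 8):
            super().__init__()
            self.base = nn.Linear(d_model, d_model)
            for p in self.base.parameters():
                p.requires_grad = False                      # base weights stay frozen
            def lora():
                return nn.Sequential(nn.Linear(d_model, rank, bias=False),
                                     nn.Linear(rank, d_model, bias=False))
            self.universal, self.specific = lora(), lora()   # two low-rank experts
            self.router = nn.Linear(d_model, 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gate = torch.softmax(self.router(x), dim=-1)     # per-token expert weights
            expert = gate[..., :1] * self.universal(x) + gate[..., 1:] * self.specific(x)
            return self.base(x) + expert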

Towards Multimodal Sentiment Analysis Debiasing via Bias Purification

no code implementations · 8 Mar 2024 · Dingkang Yang, Mingcheng Li, Dongling Xiao, Yang Liu, Kun Yang, Zhaoyu Chen, Yuzheng Wang, Peng Zhai, Ke Li, Lihua Zhang

In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases.

counterfactual · Counterfactual Inference · +1
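
A hedged sketch of test-time counterfactual inference in the spirit described above: the factual prediction is purified by subtracting predictions made under imagined counterfactual inputs. The two counterfactuals chosen here (text-only and all-zero inputs) and the function name are placeholders, not the paper's definition of MCIS.

    import torch

    def debiased_logits(model, text, audio, vision):
        """Toy counterfactual purification of a multimodal sentiment prediction."""
        factual = model(text, audio, vision)
        # Counterfactual 1 (assumed): blank non-text modalities to isolate a text-driven bias.
        cf_text_only = model(text, torch.zeros_like(audio), torch.zeros_like(vision))
        # Counterfactual 2 (assumed): blank all modalities to capture a label/prior bias.
        cf_prior = model(torch.zeros_like(text), torch.zeros_like(audio), torch.zeros_like(vision))
        return factual - cf_text_only - cf_prior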

Context De-confounded Emotion Recognition

1 code implementation · CVPR 2023 · Dingkang Yang, Zhaoyu Chen, Yuzheng Wang, Shunli Wang, Mingcheng Li, Siao Liu, Xiao Zhao, Shuai Huang, Zhiyan Dong, Peng Zhai, Lihua Zhang

However, a long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios.

Emotion Recognition

CA-SpaceNet: Counterfactual Analysis for 6D Pose Estimation in Space

1 code implementation · 16 Jul 2022 · Shunli Wang, Shuaibing Wang, Bo Jiao, Dingkang Yang, Liuzhen Su, Peng Zhai, Chixiao Chen, Lihua Zhang

Considering that the pose estimator is sensitive to background interference, this paper proposes a counterfactual analysis framework named CA-SpaceNet to achieve robust 6D pose estimation of spaceborne targets under complicated backgrounds.

6D Pose Estimation · Causal Inference · +2
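
A simplified sketch of counterfactual analysis against background interference, assuming access to a foreground mask: the backbone response to a background-only counterfactual is subtracted from the factual response, keeping the target-specific part. This illustrates the general idea only and is not CA-SpaceNet itself.

    import torch

    def counterfactual_pose_features(backbone, image, foreground_mask):
        """Toy counterfactual analysis: remove the estimated effect of background clutter."""
        factual = backbone(image)                           # real image: target + background
        background_only = image * (1.0 - foreground_mask)   # counterfactual: erase the target
        counterfactual = backbone(background_only)
        return factual - counterfactual                     # keep the target-specific response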

TSA-Net: Tube Self-Attention Network for Action Quality Assessment

2 code implementations · 11 Jan 2022 · Shunli Wang, Dingkang Yang, Peng Zhai, Chixiao Chen, Lihua Zhang

Specifically, we introduce a single object tracker into AQA and propose the Tube Self-Attention Module (TSA), which can efficiently generate rich spatio-temporal contextual information by adopting sparse feature interactions.

Action Assessment · Action Quality Assessment · +2
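
A toy version of tube self-attention: only the feature positions inside the tracked tube attend to one another, which keeps the interaction sparse, and the updated tokens are scattered back into the feature map. Tensor shapes, the boolean mask, and the scatter-back step are assumptions for this sketch, not the TSA module's exact design.

    import torch
    import torch.nn as nn

    class TubeSelfAttention(nn.Module):
        """Toy tube self-attention over the positions selected by a per-frame tube mask."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, feats: torch.Tensor, tube_mask: torch.Tensor) -> torch.Tensor:
            # feats: (T, H, W, C) per-frame feature maps; tube_mask: (T, H, W) booleans.
            T, H, W, C = feats.shape
            flat, mask = feats.reshape(1, -1, C), tube_mask.reshape(-1)
            tube = flat[:, mask]                         # sparse interaction: tube tokens only
            out, _ = self.attn(tube, tube, tube)
            updated = flat.clone()
            updated[:, mask] = out                       # scatter attended tokens back
            return updated.reshape(T, H, W, C)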
