Search Results for author: Ziyun Qian

Found 6 papers, 2 papers with code

Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning

no code implementations • 5 Nov 2024 • Mingcheng Li, Dingkang Yang, Yang Liu, Shunli Wang, Jiawei Chen, Shuaibing Wang, Jinjie Wei, Yue Jiang, Qingyao Xu, Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Lihua Zhang

Specifically, we propose a fine-grained representation factorization module that extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction.
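
A minimal sketch of the factorization idea described above, assuming simple linear encoders and a mean-squared reconstruction term; the module names, dimensions, and loss are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: split one modality's features into a sentiment-relevant part
# and a modality-specific part, then reconstruct the input from both.
# All names/dimensions are illustrative, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizationModule(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.sentiment_enc = nn.Linear(dim, dim)  # sentiment-relevant representation
        self.modality_enc = nn.Linear(dim, dim)   # modality-specific representation
        self.decoder = nn.Linear(2 * dim, dim)    # semantic reconstruction head

    def forward(self, x):
        h_sent = self.sentiment_enc(x)
        h_mod = self.modality_enc(x)
        recon = self.decoder(torch.cat([h_sent, h_mod], dim=-1))
        recon_loss = F.mse_loss(recon, x)         # keeps the factorized parts faithful to x
        return h_sent, h_mod, recon_loss

h_sent, h_mod, loss = FactorizationModule()(torch.randn(8, 256))
```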

Multimodal Sentiment Analysis • Representation Learning

Faster Diffusion Action Segmentation

no code implementations • 4 Aug 2024 • Shuaibing Wang, Shunli Wang, Mingcheng Li, Dingkang Yang, Haopeng Kuang, Ziyun Qian, Lihua Zhang

However, the many sampling steps required by diffusion models impose a substantial computational burden, limiting their practicality in real-time applications.
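
As background on why sampling cost matters, a common way to cut it is to denoise over a strided subset of the training timesteps (a DDIM-style schedule); the snippet below illustrates that general idea only and is not the acceleration proposed in this paper.

```python
# Illustrative only: a strided denoising schedule, the common way to trade
# a 1000-step training schedule for a handful of sampling steps.
def strided_schedule(num_train_steps: int, num_sample_steps: int):
    """Return a descending subset of timesteps to denoise over."""
    stride = num_train_steps // num_sample_steps
    return list(range(num_train_steps - 1, -1, -stride))[:num_sample_steps]

print(strided_schedule(1000, 10))  # 10 denoising steps instead of 1000
```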

Action Segmentation • Computational Efficiency • +2

SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion

no code implementations • 5 May 2024 • Ziyun Qian, Zeyu Xiao, Zhenyi Wu, Dingkang Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Dongliang Kou, Lihua Zhang

To address these problems, we treat style motion as a condition and, for the first time, propose the Style Motion Conditioned Diffusion (SMCD) framework, which can learn the style features of motion more comprehensively.
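
A hedged sketch of what "style motion as a condition" can look like: the denoiser receives a style-motion embedding alongside the noisy content motion and the timestep. Layer names and shapes are assumptions for illustration, not the SMCD or Mamba-based architecture.

```python
# Hedged sketch: concatenate a style-motion embedding with the noisy motion
# and the timestep before predicting the noise. Not the SMCD architecture.
import torch
import torch.nn as nn

class StyleConditionedDenoiser(nn.Module):
    def __init__(self, motion_dim: int = 64, style_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.style_enc = nn.Linear(style_dim, hidden)
        self.net = nn.Sequential(
            nn.Linear(motion_dim + hidden + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, noisy_motion, style_motion, t):
        style = self.style_enc(style_motion)      # style condition
        t_feat = t.float().unsqueeze(-1)          # timestep as a scalar feature
        return self.net(torch.cat([noisy_motion, style, t_feat], dim=-1))

noise_pred = StyleConditionedDenoiser()(
    torch.randn(4, 64), torch.randn(4, 64), torch.randint(0, 1000, (4,)))
```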

Mamba • Motion Style Transfer • +1

Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities

no code implementations • CVPR 2024 • Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang

Specifically, we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics.
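
A generic InfoNCE-style formulation of sample-level contrastive distillation, assuming a student fed incomplete modalities and a teacher fed complete ones; this is a sketch of the idea, not the paper's exact loss.

```python
# Hedged sketch: pull each student feature toward its matching teacher feature
# and away from other samples in the batch (generic InfoNCE, not the paper's loss).
import torch
import torch.nn.functional as F

def contrastive_distill(student: torch.Tensor, teacher: torch.Tensor, tau: float = 0.1):
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    logits = s @ t.T / tau                    # pairwise similarities within the batch
    targets = torch.arange(s.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_distill(torch.randn(16, 256), torch.randn(16, 256))
```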

Disentanglement • Knowledge Distillation • +1

Can LLMs' Tuning Methods Work in Medical Multimodal Domain?

2 code implementations • 11 Mar 2024 • Jiawei Chen, Yue Jiang, Dingkang Yang, Mingcheng Li, Jinjie Wei, Ziyun Qian, Lihua Zhang

In this paper, we delve into fine-tuning methods for LLMs and conduct extensive experiments to investigate their impact on existing multimodal models in the medical domain, at both the training-data level and the model-structure level.
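
For context, one widely used parameter-efficient tuning method in this space is LoRA; the sketch below shows a generic low-rank adapter on a frozen linear layer and is offered as background only, not as this paper's experimental setup.

```python
# Hedged sketch: a generic LoRA-style low-rank adapter over a frozen linear layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

out = LoRALinear(nn.Linear(512, 512))(torch.randn(2, 512))
```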

Transfer Learning • World Knowledge
