Search Results for author: Ruixiang Jiang

Found 5 papers, 5 papers with code

MoPE: Parameter-Efficient and Scalable Multimodal Fusion via Mixture of Prompt Experts

1 code implementation • 14 Mar 2024 • Ruixiang Jiang, Lingbo Liu, Changwen Chen

Building upon this disentanglement, we introduce the mixture of prompt experts (MoPE) technique to enhance expressiveness.

Disentanglement • Multimodal Deep Learning +1
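The abstract snippet describes routing over a set of learnable prompt experts. As a rough illustration only (the variable names, shapes, and routing design below are assumptions, not the paper's actual architecture), a mixture-of-prompt-experts step can be sketched as a softmax router that mixes expert prompts conditioned on features from the complementary modality:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical MoPE-style sketch: a router scores K expert prompts using
# features from the other modality, then mixes them into one
# instance-specific prompt. All shapes/names here are illustrative.
rng = np.random.default_rng(0)
K, d_feat, L, d_prompt = 4, 8, 5, 16  # experts, feature dim, prompt length, prompt dim

expert_prompts = rng.standard_normal((K, L, d_prompt))  # learnable prompt experts
router_w = rng.standard_normal((d_feat, K))             # learnable routing weights

def mope_prompt(other_modality_feat):
    """Mix expert prompts, weighted by a router conditioned on the other modality."""
    logits = other_modality_feat @ router_w                  # (K,) routing scores
    weights = softmax(logits)                                # convex combination
    return np.einsum("k,kld->ld", weights, expert_prompts)  # (L, d_prompt)

prompt = mope_prompt(rng.standard_normal(d_feat))
print(prompt.shape)  # (5, 16)
```

In this reading, expressiveness scales with the number of experts while the per-instance prompt stays a fixed size.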

Conditional Prompt Tuning for Multimodal Fusion

1 code implementation • 28 Nov 2023 • Ruixiang Jiang, Lingbo Liu, Changwen Chen

We show that the representation of one modality can effectively guide the prompting of another modality for parameter-efficient multimodal fusion.
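The claim that one modality's representation can guide another's prompting can be pictured as a small learned map from modality-A features to prompt tokens prepended to modality-B's token sequence. This is a minimal sketch under assumed shapes, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
d_a, L, d_b, N = 8, 4, 16, 10  # guiding feature dim, prompt length, token dim, seq len

# Hypothetical cross-modal mapper (illustrative only): projects modality-A
# features into L prompt tokens of modality-B's embedding size.
W_map = rng.standard_normal((d_a, L * d_b)) * 0.02

def conditional_prompt(feat_a, tokens_b):
    """Generate prompt tokens from modality A and prepend them to modality B."""
    prompt = (feat_a @ W_map).reshape(L, d_b)          # instance-conditioned prompt
    return np.concatenate([prompt, tokens_b], axis=0)  # (L + N, d_b)

out = conditional_prompt(rng.standard_normal(d_a), rng.standard_normal((N, d_b)))
print(out.shape)  # (14, 16)
```

Only the small mapper would be trained, which is what makes such fusion parameter-efficient relative to fine-tuning either backbone.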

CLIP-Count: Towards Text-Guided Zero-Shot Object Counting

1 code implementation • 12 May 2023 • Ruixiang Jiang, Lingbo Liu, Changwen Chen

Specifically, we propose CLIP-Count, the first end-to-end pipeline that estimates density maps for open-vocabulary objects with text guidance in a zero-shot manner.

Cross-Part Crowd Counting • Cross-Part Evaluation +6
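The core idea of text-guided density estimation can be illustrated with a toy computation: score each image patch against a text embedding, rectify the scores into a density map, and integrate the map to get a count. This sketch assumes CLIP-like normalized embeddings and is not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, d = 6, 6, 32  # patch grid and embedding dim (illustrative values)

patch_feats = rng.standard_normal((H, W, d))  # stand-in for image patch embeddings
text_feat = rng.standard_normal(d)            # stand-in for a text-query embedding

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def density_map(patch_feats, text_feat):
    """Cosine similarity of each patch to the text query, rectified into a density map."""
    sim = l2norm(patch_feats) @ l2norm(text_feat)  # (H, W) cosine similarities
    return np.maximum(sim, 0.0)                    # keep only positive evidence

dmap = density_map(patch_feats, text_feat)
count = dmap.sum()  # integrating the density gives the estimated object count
print(dmap.shape)   # (6, 6)
```

Because the query is free-form text, the same mechanism extends to open-vocabulary objects without class-specific training, which is the zero-shot aspect the abstract highlights.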

AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control

1 code implementation • ICCV 2023 • Ruixiang Jiang, Can Wang, Jingbo Zhang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao

Neural implicit fields are powerful for representing 3D scenes and generating high-quality novel views, but it remains challenging to use such implicit representations for creating a 3D human avatar with a specific identity and artistic style that can be easily animated.

NeRF-Art: Text-Driven Neural Radiance Fields Stylization

1 code implementation • 15 Dec 2022 • Can Wang, Ruixiang Jiang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao

As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images.

Contrastive Learning • Novel View Synthesis
