Search Results for author: Nisha Huang

Found 8 papers, 8 papers with code

CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion

1 code implementation • 25 Jan 2024 • Nisha Huang, Weiming Dong, Yuxin Zhang, Fan Tang, Ronghui Li, Chongyang Ma, Xiu Li, Changsheng Xu

Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.

Image Generation • Style Transfer

MotionCrafter: One-Shot Motion Customization of Diffusion Models

1 code implementation • 8 Dec 2023 • Yuxin Zhang, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Weiming Dong, Changsheng Xu

The essence of a video lies in its dynamic motions, including character actions, object movements, and camera movements.

Disentanglement • Motion Disentanglement (+3 more)

ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models

3 code implementations • 25 May 2023 • Yuxin Zhang, Weiming Dong, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Oliver Deussen, Changsheng Xu

We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulations of materials, style, and layout, achieving previously unattainable results from a single image input without fine-tuning the diffusion models.

Attribute • Disentanglement (+1 more)
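
ProSpect's central idea, per the title, is a "prompt spectrum": different attributes are controlled at different stages of the denoising process. The sketch below illustrates that stage-wise conditioning pattern in a generic form; the stage boundaries, embedding shapes, and the `denoiser` callable are illustrative assumptions, not the paper's exact design.

```python
import torch

# Hypothetical per-stage prompt embeddings: one learned conditioning per band
# of denoising timesteps (layout at high noise, material/style at low noise).
# Shapes and stage names are illustrative, not the paper's exact values.
stage_embeddings = {
    "layout":   torch.randn(77, 768),  # early (high-noise) steps
    "content":  torch.randn(77, 768),  # middle steps
    "material": torch.randn(77, 768),  # late (low-noise) steps
}

def embedding_for_step(t: int, num_steps: int = 1000) -> torch.Tensor:
    """Pick the prompt embedding whose stage covers timestep t."""
    if t > 2 * num_steps // 3:
        return stage_embeddings["layout"]
    if t > num_steps // 3:
        return stage_embeddings["content"]
    return stage_embeddings["material"]

def denoise(x: torch.Tensor, denoiser, timesteps) -> torch.Tensor:
    """Generic denoising loop that swaps the text conditioning per stage."""
    for t in timesteps:
        x = denoiser(x, t, embedding_for_step(int(t)))  # hypothetical UNet-style call
    return x
```

In this scheme, swapping which stage receives an embedding inverted from a reference image is what would let a single image drive material, style, or layout edits without fine-tuning the model.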

Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer

1 code implementation • 9 May 2023 • Nisha Huang, Yuxin Zhang, Weiming Dong

Large-scale text-to-video diffusion models have demonstrated an exceptional ability to synthesize diverse videos.

Denoising • Style Transfer (+1 more)

Region-Aware Diffusion for Zero-shot Text-driven Image Editing

1 code implementation • 23 Feb 2023 • Nisha Huang, Fan Tang, Weiming Dong, Tong-Yee Lee, Changsheng Xu

Different from current mask-based image editing methods, we propose a novel region-aware diffusion model (RDM) for entity-level image editing, which can automatically locate the region of interest and replace it according to the given text prompts.

Image Manipulation
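
The abstract describes two steps: localize the region that matches the text, then replace it during diffusion. Below is a minimal sketch of the generic masked-diffusion editing recipe that fits this description; it is not necessarily RDM's exact procedure, and `denoiser`, `add_noise`, and the precomputed `mask` (e.g., a thresholded patchwise image-text similarity map) are stand-ins.

```python
import torch

def masked_edit(x0, mask, cond, denoiser, add_noise, timesteps):
    """
    Generic masked diffusion editing: inside `mask`, denoise toward the text
    condition; outside it, keep the original image by re-noising it to the
    current level. A standard blended recipe, shown only to illustrate the
    locate-then-replace idea from the abstract.
    """
    x = torch.randn_like(x0)               # the edited region starts from noise
    for t in timesteps:
        edited = denoiser(x, t, cond)      # hypothetical text-conditioned step
        kept = add_noise(x0, t)            # original content at noise level t
        x = mask * edited + (1 - mask) * kept
    return x
```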

Inversion-Based Style Transfer with Diffusion Models

1 code implementation • CVPR 2023 • Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, Weiming Dong, Changsheng Xu

Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions.

Denoising • Style Transfer (+1 more)
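
Learning artistic style directly from a single painting is commonly done by inverting the painting into the model's conditioning space. A minimal textual-inversion-style sketch follows, assuming an epsilon-prediction denoiser and an `add_noise(image, t) -> (noisy, noise)` helper; the objective and shapes are schematic, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def learn_style_embedding(painting, denoiser, add_noise, num_iters=500):
    """
    Optimize a single conditioning vector so the frozen denoiser reconstructs
    the painting: a textual-inversion-style sketch of learning style from one
    image, not the paper's exact objective.
    """
    style = torch.randn(1, 768, requires_grad=True)  # hypothetical embedding size
    opt = torch.optim.Adam([style], lr=1e-3)
    for _ in range(num_iters):
        t = torch.randint(0, 1000, (1,))          # random noise level
        noisy, noise = add_noise(painting, t)     # forward diffusion
        pred = denoiser(noisy, t, style)          # predict the added noise
        loss = F.mse_loss(pred, noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return style.detach()  # reuse as conditioning to stylize new content
```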

DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization

1 code implementation • 19 Nov 2022 • Nisha Huang, Yuxin Zhang, Fan Tang, Chongyang Ma, Haibin Huang, Yong Zhang, Weiming Dong, Changsheng Xu

Despite the impressive results of image-guided arbitrary style transfer methods, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user.

Denoising • Image Stylization

Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion

1 code implementation • 27 Sep 2022 • Nisha Huang, Fan Tang, Weiming Dong, Changsheng Xu

Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of combining the diffusion model with multimodal guidance.
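
The combination referenced here, a diffusion model steered by multimodal guidance, typically follows the classifier-guidance pattern: nudge each denoising step along the gradient of an image-text similarity score. A generic sketch is given below; `clip_score` (a scalar CLIP-style similarity), `denoiser`, and the guidance scale are illustrative assumptions, not the paper's exact rule.

```python
import torch

def guided_step(x, t, denoiser, clip_score, prompt_feat, scale=100.0):
    """
    One denoising step with multimodal guidance: take the model's update,
    then nudge the sample along the gradient of a scalar image-text
    similarity. A generic guidance sketch, not the paper's exact method.
    """
    x = x.detach().requires_grad_(True)
    sim = clip_score(x, prompt_feat)           # hypothetical scalar similarity
    grad = torch.autograd.grad(sim, x)[0]      # direction that raises similarity
    with torch.no_grad():
        x_next = denoiser(x, t)                # unguided denoising update
        x_next = x_next + scale * grad         # steer toward the prompt
    return x_next
```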
