Search Results for author: Chun-Hao Paul Huang

Found 5 papers, 1 paper with code

ActAnywhere: Subject-Aware Video Background Generation

no code implementations · 19 Jan 2024 · Boxiao Pan, Zhan Xu, Chun-Hao Paul Huang, Krishna Kumar Singh, Yang Zhou, Leonidas J. Guibas, Jimei Yang

Generating video backgrounds tailored to foreground subject motion is an important problem for the movie industry and the visual effects community.

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models

no code implementations · 3 Dec 2023 · Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path.

Tasks: Text-to-Image Generation, Video Generation

BLiSS: Bootstrapped Linear Shape Space

no code implementations · 4 Sep 2023 · Sanjeev Muralikrishnan, Chun-Hao Paul Huang, Duygu Ceylan, Niloy J. Mitra

Morphable models are fundamental to numerous human-centered processes as they offer a simple yet expressive shape space.
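A linear shape space of the kind the title refers to can be illustrated with the standard morphable-model formulation: a new shape is the mean shape plus a coefficient-weighted sum of basis deformations. The sizes and variable names below are illustrative toy values, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_verts, n_basis = 100, 10  # toy sizes, not the paper's dimensions

# Mean shape and a PCA-style deformation basis over mesh vertices.
mean_shape = rng.normal(size=(n_verts, 3))
basis = rng.normal(size=(n_basis, n_verts, 3))

# A point in the linear shape space: mean plus weighted basis shapes.
coeffs = rng.normal(size=n_basis)
shape = mean_shape + np.tensordot(coeffs, basis, axes=1)

assert shape.shape == (n_verts, 3)
```

The expressiveness comes from the basis: varying the low-dimensional `coeffs` vector sweeps out a family of plausible shapes.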

Pix2Video: Video Editing using Image Diffusion

1 code implementation · ICCV 2023 · Duygu Ceylan, Chun-Hao Paul Huang, Niloy J. Mitra

Our method works in two simple steps: first, we use a pre-trained structure-guided (e.g., depth-conditioned) image diffusion model to perform text-guided edits on an anchor frame; then, in the key step, we progressively propagate the changes to future frames via self-attention feature injection, adapting the core denoising step of the diffusion model.
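The self-attention feature injection step can be sketched with a toy attention computation: the current frame supplies the queries, while the keys and values are taken from the anchor frame's features, so the anchor's edited appearance influences later frames. All tensors, sizes, and helper names below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over flattened spatial tokens."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
tokens, dim = 16, 8  # toy sizes
anchor_feat = rng.normal(size=(tokens, dim))  # features of the edited anchor frame
frame_feat = rng.normal(size=(tokens, dim))   # features of the current frame

# Standard self-attention: the frame attends only to itself.
plain = attention(frame_feat, frame_feat, frame_feat)

# Feature injection: queries come from the current frame, but keys and
# values come from the anchor, propagating its edits to this frame.
injected = attention(frame_feat, anchor_feat, anchor_feat)

assert plain.shape == injected.shape == (tokens, dim)
```

In the actual method this substitution happens inside the diffusion model's self-attention layers at each denoising step, rather than on raw features as in this sketch.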

Tasks: Denoising, Text Generation, +1
