Search Results for author: Anbo Dai

Found 2 papers, 0 papers with code

Edit Temporal-Consistent Videos with Image Diffusion Model

no code implementations · 17 Aug 2023 · Yuanzhi Wang, Yong Li, Xiaoya Zhang, Xin Liu, Anbo Dai, Antoni B. Chan, Zhen Cui

In addition to using a pretrained T2I 2D Unet for spatial content manipulation, we establish a dedicated temporal Unet architecture to faithfully capture the temporal coherence of the input video sequence.
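The temporal modeling described above can be illustrated as attention applied along the frame axis. The sketch below is a minimal NumPy illustration of that idea, not the paper's temporal Unet; the function name, tensor layout `(T, H, W, C)`, and shapes are all assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(frames):
    """Self-attention along the time axis at each spatial location (sketch).

    frames: (T, H, W, C) per-frame feature maps (hypothetical layout).
    Each output frame becomes a weighted mix of all frames, which is one
    way to encourage temporal coherence across a video sequence.
    """
    T, H, W, C = frames.shape
    # group by spatial location so time is the attended axis: (H*W, T, C)
    seq = frames.transpose(1, 2, 0, 3).reshape(H * W, T, C)
    scores = seq @ seq.transpose(0, 2, 1) / np.sqrt(C)   # (H*W, T, T)
    weights = softmax(scores, axis=-1)
    out = weights @ seq                                  # (H*W, T, C)
    return out.reshape(H, W, T, C).transpose(2, 0, 1, 3)
```

Because the attention weights in each row sum to one, a clip whose frames are already identical passes through unchanged, while differing frames are smoothed toward each other.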

Video Temporal Consistency

Dual-Stream Diffusion Net for Text-to-Video Generation

no code implementations · 16 Aug 2023 · Binhui Liu, Xin Liu, Anbo Dai, Zhiyong Zeng, Dan Wang, Zhen Cui, Jian Yang

In particular, the two designed diffusion streams, a video content branch and a motion branch, not only run separately in their private spaces to produce personalized video variations and content, but are also kept well aligned across the content and motion domains by our designed cross-transformer interaction module, which benefits the smoothness of the generated videos.
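The cross-domain alignment described above can be sketched as cross-attention, where tokens from one branch query the other. The snippet below is a hedged NumPy illustration of one such direction, not the paper's cross-transformer interaction module; the function name, token shapes, and the residual connection are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_stream_attention(content_tokens, motion_tokens):
    """One direction of a cross-transformer-style interaction (sketch).

    content_tokens: (N, C) tokens from the content branch (hypothetical).
    motion_tokens:  (M, C) tokens from the motion branch  (hypothetical).
    Content queries attend over motion keys/values, so the content stream
    is updated with motion-aligned information; a residual connection
    preserves the original content signal.
    """
    C = content_tokens.shape[1]
    scores = content_tokens @ motion_tokens.T / np.sqrt(C)  # (N, M)
    weights = softmax(scores, axis=-1)
    return content_tokens + weights @ motion_tokens         # (N, C)
```

A symmetric call with the arguments swapped would update the motion stream from the content stream, giving the bidirectional alignment the abstract describes.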

Text-to-Video Generation Video Generation
