no code implementations • 26 May 2024 • Jinlin Liu, Kai Yu, Mengyang Feng, Xiefan Guo, Miaomiao Cui
Trained on real-world videos enhanced with this motion depiction approach, our model generates videos that exhibit coherent movement in both foreground subjects and their surrounding contexts.
1 code implementation • 6 Apr 2024 • Xiefan Guo, Jinlin Liu, Miaomiao Cui, Jiankai Li, Hongyu Yang, Di Huang
Recent advances in diffusion models, exemplified by Stable Diffusion, have demonstrated a remarkable ability to generate visually compelling images.
no code implementations • 8 Dec 2023 • Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Xiaoyang Kang, Biwen Lei, Miaomiao Cui, Peiran Ren, Xuansong Xie
In this paper, we present DreaMoving, a diffusion-based controllable video generation framework to produce high-quality customized human videos.
1 code implementation • CVPR 2022 • Biwen Lei, Xiefan Guo, Hongyu Yang, Miaomiao Cui, Xuansong Xie, Di Huang
The network comprises two main components: a context-aware local retouching layer (LRL) and an adaptive blend pyramid layer (BPL).
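The abstract does not detail how the blend pyramid operates, but the general idea behind pyramid-based blending is to apply an edit at a coarse scale and then re-inject the high-frequency detail of the original image level by level. The following is a minimal NumPy sketch of that generic technique only, not the paper's BPL implementation; all function names here are illustrative assumptions.

```python
import numpy as np

def downsample(img):
    # Naive 2x average-pool downsample (assumes even spatial dimensions).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbor 2x upsample.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_blend(original, retouched_coarse, levels=2):
    """Blend a coarse retouched result back to full resolution,
    re-adding the original's high-frequency residual at each level."""
    # Stack of progressively downsampled versions of the original.
    stack = [original]
    for _ in range(levels):
        stack.append(downsample(stack[-1]))
    out = retouched_coarse
    for level in reversed(range(levels)):
        # High-frequency detail lost between this level and the coarser one.
        detail = stack[level] - upsample(stack[level + 1])
        out = upsample(out) + detail
    return out
```

By construction, if the coarse input is left unedited the procedure reconstructs the original exactly; edits made at the coarse level propagate upward while fine detail is preserved. A learned BPL would replace these fixed resampling and blending steps with adaptive, trained operators.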
4 code implementations • ICCV 2021 • Xiefan Guo, Hongyu Yang, Di Huang
Deep generative approaches have recently made considerable progress in image inpainting by introducing structure priors.