Search Results for author: Peiqing Yang

Found 3 papers, 2 papers with code

Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution

no code implementations • 11 Dec 2023 • Shangchen Zhou, Peiqing Yang, Jianyi Wang, Yihang Luo, Chen Change Loy

Text-based diffusion models have exhibited remarkable success in generation and editing, showing great promise for enhancing visual content with their generative prior.

Video Super-Resolution

LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models

2 code implementations • 26 Sep 2023 • Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, Yuwei Guo, Tianxing Wu, Chenyang Si, Yuming Jiang, Cunjian Chen, Chen Change Loy, Bo Dai, Dahua Lin, Yu Qiao, Ziwei Liu

To this end, we propose LaVie, an integrated video generation framework that operates on cascaded video latent diffusion models, comprising a base T2V model, a temporal interpolation model, and a video super-resolution model.

Text-to-Video Generation • Video Generation +1
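
The abstract excerpt above describes LaVie as a cascade of three latent diffusion models: a base text-to-video model, a temporal interpolation model, and a video super-resolution model. Below is a minimal sketch of how such a three-stage cascade could be wired together; the class and method names are hypothetical placeholders for illustration, not the actual LaVie implementation or API.

```python
# Hypothetical sketch of a cascaded video-generation pipeline in the spirit of
# the LaVie description above: base T2V -> temporal interpolation -> video SR.
# All class and method names are placeholders, not the actual LaVie API.
from dataclasses import dataclass
import numpy as np


@dataclass
class VideoLatents:
    frames: np.ndarray  # (num_frames, height, width, channels)


class BaseT2VModel:
    def generate(self, prompt: str, num_frames: int = 16) -> VideoLatents:
        # Placeholder: a real base model would run latent diffusion sampling.
        return VideoLatents(frames=np.zeros((num_frames, 40, 64, 4)))


class TemporalInterpolationModel:
    def interpolate(self, video: VideoLatents, factor: int = 4) -> VideoLatents:
        # Placeholder: repeat frames to stand in for raising the frame rate.
        return VideoLatents(frames=np.repeat(video.frames, factor, axis=0))


class VideoSuperResolutionModel:
    def upscale(self, video: VideoLatents, scale: int = 4) -> VideoLatents:
        # Placeholder: nearest-neighbour upscaling stands in for diffusion SR.
        frames = video.frames.repeat(scale, axis=1).repeat(scale, axis=2)
        return VideoLatents(frames=frames)


def cascaded_generate(prompt: str) -> VideoLatents:
    """Run the three stages in sequence, each refining the previous output."""
    base = BaseT2VModel().generate(prompt)
    smooth = TemporalInterpolationModel().interpolate(base)
    return VideoSuperResolutionModel().upscale(smooth)


if __name__ == "__main__":
    video = cascaded_generate("a corgi running on the beach")
    print(video.frames.shape)  # (64, 160, 256, 4) with the defaults above
```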

PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance

1 code implementation • NeurIPS 2023 • Peiqing Yang, Shangchen Zhou, Qingyi Tao, Chen Change Loy

When combined with a diffusion prior, this partial guidance can deliver appealing results across a range of restoration tasks.
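
The excerpt above refers to steering a pretrained diffusion prior with partial guidance toward desired properties of the restored output. The sketch below shows one schematic way a guidance loss can nudge a sampling loop; the denoiser interface, re-noising step, and update rule are illustrative assumptions, not PGDiff's exact algorithm.

```python
# Schematic sketch of guidance-steered diffusion sampling, in the spirit of the
# "partial guidance" idea described above. The denoiser, the guidance loss, and
# the update rule are illustrative assumptions, not PGDiff's exact algorithm.
import torch


def guided_sampling(denoiser, guidance_loss, steps=50, guidance_scale=1.0,
                    shape=(1, 3, 64, 64)):
    """Sampling loop with a gradient nudge from a guidance loss.

    denoiser(x, t)    -> predicted clean image x0 (assumed interface)
    guidance_loss(x0) -> scalar penalty on the properties we want to enforce
    """
    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        x0 = denoiser(x, t)                      # diffusion prior's prediction
        loss = guidance_loss(x0)                 # penalty on desired properties
        grad = torch.autograd.grad(loss, x)[0]   # direction that reduces the loss
        with torch.no_grad():
            x = x0 + 0.1 * t / steps * torch.randn_like(x)  # crude re-noising
            x = x - guidance_scale * grad        # apply the partial guidance
    return x.detach()


# Toy usage: an identity "denoiser" and a loss that pulls pixels toward 0.5.
result = guided_sampling(lambda x, t: x, lambda x0: ((x0 - 0.5) ** 2).mean())
print(result.shape)  # torch.Size([1, 3, 64, 64])
```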
