Search Results for author: Xiaoqian Ye

Found 3 papers, 2 papers with code

Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting

no code implementations • 29 Apr 2024 • Tianyidan Xie, Rui Ma, Qian Wang, Xiaoqian Ye, Feixuan Liu, Ying Tai, Zhenyu Zhang, Zili Yi

In the image generation module, we employ a text-guided canny-to-image generation model to create a template image from the edge map of the foreground image and the language prompts; an image refiner then produces the final outcome by blending the input foreground with the template image.

Image Inpainting · Language Modelling · +1
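The abstract above describes two deterministic steps around the generative model: extracting an edge map from the foreground to condition generation, and blending the input foreground back into the generated template. A minimal NumPy sketch of just those two steps follows; `edge_map` is a crude gradient-based stand-in for Canny, and the text-guided canny-to-image model itself is not reproduced, so this is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def edge_map(gray: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Crude gradient-magnitude edge map of a grayscale image, standing in
    for the Canny edges that would condition the canny-to-image model."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal central difference
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak == 0:
        return np.zeros(gray.shape, dtype=np.uint8)
    return (mag > thresh * peak).astype(np.uint8)

def blend_foreground(template: np.ndarray, foreground: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Image-refiner stand-in: composite the input foreground over the
    generated template wherever the foreground mask is set."""
    m = mask[..., None].astype(template.dtype)   # broadcast to H x W x 1
    return foreground * m + template * (1.0 - m)

# Tiny synthetic demo: a white square foreground over a gray template.
fg = np.zeros((8, 8, 3))
fg[2:6, 2:6] = 1.0
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
template = np.full((8, 8, 3), 0.5)

edges = edge_map(fg[..., 0])
out = blend_foreground(template, fg, mask)
```

In a real pipeline the `template` would come from the generation model conditioned on `edges` and a prompt, rather than being a constant image as here.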

AddSR: Accelerating Diffusion-based Blind Super-Resolution with Adversarial Diffusion Distillation

1 code implementation • 2 Apr 2024 • Rui Xie, Ying Tai, Chen Zhao, Kai Zhang, Zhenyu Zhang, Jun Zhou, Xiaoqian Ye, Qian Wang, Jian Yang

Blind super-resolution methods based on Stable Diffusion showcase formidable generative capabilities, reconstructing clear high-resolution images with intricate details from low-resolution inputs.

Blind Super-Resolution · Super-Resolution
