Blended Diffusion enables zero-shot, local, text-guided editing of natural images. Given an input image $x$, an input mask $m$, and a target guiding text $t$, the method changes the masked region of the image to match the guiding text while leaving the unmasked region unchanged.
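The core mechanism can be sketched as a per-step spatial blend: at each reverse-diffusion step, pixels inside the mask come from the guided denoising sample, while pixels outside the mask are replaced with a noised copy of the input image at the matching noise level, so the background stays faithful to $x$. Below is a minimal, illustrative sketch of that idea; `denoise_step` and `add_noise` are hypothetical stand-ins for the guided diffusion model and the forward-noising process, not the paper's actual API.

```python
import numpy as np

def blend(x_fg, x_bg, mask):
    """Spatial blend: take masked pixels from x_fg (the guided denoised
    sample) and unmasked pixels from x_bg (a noised copy of the input)."""
    return mask * x_fg + (1 - mask) * x_bg

def edit_loop(x_input, mask, denoise_step, add_noise, num_steps):
    """Sketch of the per-step blending loop (names are illustrative).

    denoise_step(x_t, t): one guided reverse-diffusion step.
    add_noise(x, t): forward-noise x to the noise level of step t.
    """
    x_t = np.random.randn(*x_input.shape)       # start from pure noise
    for t in reversed(range(num_steps)):
        x_t = denoise_step(x_t, t)              # text-guided denoising step
        x_bg = add_noise(x_input, t)            # input noised to level t
        x_t = blend(x_t, x_bg, mask)            # enforce unmasked background
    return x_t
```

With `mask` all ones the loop reduces to plain guided generation; with `mask` all zeros it returns (a noised-then-blended copy of) the input, which is why only the masked region changes.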
Source: Blended Diffusion for Text-driven Editing of Natural Images
| Task | Papers | Share |
|---|---|---|
| Image Generation | 2 | 18.18% |
| Image Inpainting | 2 | 18.18% |
| Text-Guided Image Editing | 2 | 18.18% |
| Text-to-Image Generation | 2 | 18.18% |
| Zero-Shot Text-to-Image Generation | 2 | 18.18% |
| Super-Resolution | 1 | 9.09% |