Blended Diffusion

Introduced by Avrahami et al. in Blended Diffusion for Text-driven Editing of Natural Images

Blended Diffusion enables zero-shot, local, text-guided editing of natural images. Given an input image $x$, an input mask $m$, and a target guiding text $t$, the method changes the masked region of the image according to the guiding text such that the unmasked region is left unchanged.

Source: Blended Diffusion for Text-driven Editing of Natural Images
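The core idea is to enforce the background constraint at every reverse-diffusion step: the text-guided denoising result is kept inside the mask, while outside the mask it is replaced by the input image noised to the matching noise level, so the two parts stay statistically compatible. Below is a minimal sketch of one such step; `denoise_step` and `noise_to_level` are hypothetical stand-ins for a text-guided denoiser (CLIP-guided in the paper) and the forward noising process, not the authors' actual API.

```python
import torch

def blended_diffusion_step(x_t, x_source, mask, t, denoise_step, noise_to_level):
    """One reverse-diffusion step with background blending (sketch).

    x_t            : current diffusion latent, shape (B, C, H, W)
    x_source       : original input image, in the model's value range
    mask           : binary mask, 1 = region to edit, 0 = keep unchanged
    t              : current timestep
    denoise_step   : hypothetical text-guided denoiser, maps x_t -> x_{t-1}
    noise_to_level : hypothetical helper that noises x_source to timestep t-1
    """
    # Text-guided denoising of the full latent.
    x_edited = denoise_step(x_t, t)
    # Noise the source image to the same noise level as x_{t-1},
    # so blending mixes latents at a consistent point in the process.
    x_background = noise_to_level(x_source, t - 1)
    # Keep the edit inside the mask, the (noised) original outside it.
    return mask * x_edited + (1 - mask) * x_background
```

Repeating this blend at every step, rather than compositing only once at the end, is what keeps the edited region coherent with the untouched background.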
