Layout-to-Image Generation
16 papers with code • 7 benchmarks • 4 datasets
Layout-to-image generation is the task of generating a scene from a given layout. The layout specifies the locations of the objects to be included in the output image. In this section, you can find state-of-the-art leaderboards for layout-to-image generation.
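To make the input format concrete, the sketch below represents a layout as a list of class-labeled bounding boxes; the type and function names are hypothetical and not taken from any of the papers listed here.

```python
# Minimal sketch of a layout-to-image input: a layout is a list of objects,
# each with a class label and a bounding box in normalized [0, 1] coordinates.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LayoutObject:
    label: str                               # object class, e.g. "dog"
    box: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

layout = [
    LayoutObject("sky", (0.0, 0.0, 1.0, 0.4)),
    LayoutObject("dog", (0.3, 0.5, 0.7, 0.95)),
]

def generate_image(layout: List[LayoutObject], height: int = 256, width: int = 256):
    """Placeholder: a trained layout-to-image model would render an image in
    which each labeled object occupies its bounding box."""
    raise NotImplementedError("plug in a trained layout-to-image model here")
```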
Most implemented papers
High-Resolution Image Synthesis with Latent Diffusion Models
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.
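The sequential denoising idea can be sketched in a few lines; this toy loop uses a stand-in denoiser and a simplified update rule, not the actual Latent Diffusion sampler.

```python
# Toy sketch of diffusion sampling: start from noise and repeatedly apply a
# denoising network. `denoiser` is a stand-in for a trained noise-prediction
# U-Net, and the update rule is simplified for readability.
import torch

def denoiser(x, t):
    # Stand-in for a trained network eps_theta(x_t, t) that predicts noise.
    return torch.zeros_like(x)

num_steps = 50
x = torch.randn(1, 4, 32, 32)  # LDMs denoise in a compressed latent space
for t in reversed(range(num_steps)):
    eps = denoiser(x, t)     # predicted noise at step t
    x = x - eps / num_steps  # simplified step; real samplers derive this
                             # update from the noise schedule
# A latent diffusion model would now decode x back to pixel space.
```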
Image Generation from Scene Graphs
To overcome this limitation, we propose a method for generating images from scene graphs, enabling explicit reasoning about objects and their relationships.
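For illustration, a scene graph can be stored as a set of (subject, predicate, object) triples; the snippet below is a generic sketch, not the paper's code.

```python
# Generic sketch of a scene graph: objects as nodes, relationships as
# labeled directed edges, stored as (subject, predicate, object) triples.
objects = ["sheep", "grass", "tree"]
relations = [
    ("sheep", "standing on", "grass"),
    ("tree", "behind", "sheep"),
]

# A scene-graph-to-image model reasons over these triples, typically by
# first predicting a coarse layout (one box per node) and then rendering it.
for subject, predicate, obj in relations:
    print(f"{subject} --{predicate}--> {obj}")
```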
Image Synthesis From Reconfigurable Layout and Style
Despite remarkable recent progress on both unconditional and conditional image synthesis, it remains a long-standing problem to learn generative models that are capable of synthesizing realistic and sharp images from reconfigurable spatial layout (i.e., bounding boxes + class labels in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors), especially at high resolution.
Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis
This paper focuses on a recently emerged task, layout-to-image: learning generative models that are capable of synthesizing photo-realistic images from spatial layout (i.e., object bounding boxes configured in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors).
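A minimal sketch of the layout-plus-style input described in the two entries above, assuming one latent style code per object plus a global image latent; the structure and names are illustrative, not the papers' actual interfaces.

```python
# Sketch of a "layout + style" input: each object gets a class label, a
# bounding box, and its own latent style code, so layout and per-object
# appearance can be reconfigured independently.
import torch

style_dim = 128
layout = [
    {"label": "car",    "box": (0.1, 0.6, 0.5, 0.9)},
    {"label": "person", "box": (0.6, 0.4, 0.8, 0.9)},
]
object_styles = [torch.randn(style_dim) for _ in layout]  # one latent per object
image_style = torch.randn(style_dim)                      # global latent

# Re-sampling a single object's latent changes only that object's appearance
# while the boxes (and the other objects) stay fixed:
object_styles[0] = torch.randn(style_dim)
```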
Specifying Object Attributes and Relations in Interactive Scene Generation
We introduce a method for the generation of images from an input scene graph.
Learning Canonical Representations for Scene Graph to Image Generation
Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images.
Semantic Image Manipulation Using Scene Graphs
In our work, we address the novel problem of image manipulation from scene graphs, in which a user can edit images by merely applying changes in the nodes or edges of a semantic graph that is generated from the image.
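Under the same generic triple representation sketched earlier, a graph edit amounts to changing a node or an edge before re-synthesis; the example below is illustrative only.

```python
# Illustrative edit on the triple representation above: the user changes one
# relationship, and the manipulation model re-synthesizes the affected region.
relations = [("sheep", "standing on", "grass")]

subject, _, obj = relations[0]
relations[0] = (subject, "lying on", obj)  # edit the edge label
# The model would now regenerate the image to match the edited graph while
# leaving the rest of the scene untouched.
```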
Generating Annotated High-Fidelity Images Containing Multiple Coherent Objects
In particular, layout-to-image generation models have gained significant attention due to their capability to generate realistic, complex images containing distinct objects.
Context-Aware Layout to Image Generation with Enhanced Object Appearance
We argue that these are caused by the lack of context-aware object and stuff feature encoding in their generators, and location-sensitive appearance representation in their discriminators.
AttrLostGAN: Attribute Controlled Image Synthesis from Reconfigurable Layout and Style
In this paper, we propose a method for attribute-controlled image synthesis from layout, which makes it possible to specify the appearance of individual objects without affecting the rest of the image.
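As a generic illustration of attribute control, each layout object below carries an optional list of appearance attributes; the field names are hypothetical.

```python
# Sketch of an attribute-controlled layout: each object carries optional
# appearance attributes alongside its class label and bounding box.
layout = [
    {"label": "car", "box": (0.1, 0.6, 0.5, 0.9), "attributes": ["red", "vintage"]},
    {"label": "sky", "box": (0.0, 0.0, 1.0, 0.4), "attributes": ["cloudy"]},
]

# Changing "red" to "blue" should alter only the car's appearance, leaving
# the layout and the rest of the image unchanged.
layout[0]["attributes"] = ["blue", "vintage"]
```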