Layout-to-Image Generation

18 papers with code • 7 benchmarks • 4 datasets

Layout-to-image generation is the task of generating a scene from a given layout, which specifies the locations of the objects to be included in the output image. In this section, you can find state-of-the-art leaderboards for layout-to-image generation.
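To make the input format concrete, a layout is typically a set of labeled bounding boxes. The sketch below is purely illustrative: the class and field names (`LayoutBox`, `label`, `x`, `y`, `w`, `h`) are hypothetical and not tied to any specific benchmark or model.

```python
# Illustrative sketch of a layout: a list of labeled bounding boxes with
# normalized [x, y, width, height] coordinates. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class LayoutBox:
    label: str   # object category to render
    x: float     # left edge, normalized to [0, 1]
    y: float     # top edge, normalized to [0, 1]
    w: float     # box width, normalized
    h: float     # box height, normalized

# A layout is simply the set of boxes the generator must respect.
layout = [
    LayoutBox("sky",  0.0, 0.0, 1.0, 0.4),
    LayoutBox("tree", 0.1, 0.3, 0.2, 0.6),
    LayoutBox("car",  0.5, 0.6, 0.3, 0.3),
]

def validate(boxes):
    """Check that every box stays inside the normalized image frame."""
    return all(
        0.0 <= b.x and 0.0 <= b.y and b.x + b.w <= 1.0 and b.y + b.h <= 1.0
        for b in boxes
    )

print(validate(layout))  # True for the example above
```

A layout-to-image model takes such a box list (and often a text prompt) as conditioning and must place each object inside its box.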

Latest papers with no code

ObjBlur: A Curriculum Learning Approach With Progressive Object-Level Blurring for Improved Layout-to-Image Generation

no code yet • 11 Apr 2024

We present ObjBlur, a novel curriculum learning approach to improve layout-to-image generation models, where the task is to produce realistic images from layouts composed of boxes and labels.

Layout-to-Image Generation with Localized Descriptions using ControlNet with Cross-Attention Control

no code yet • 20 Feb 2024

While text-to-image diffusion models can generate high-quality images from textual descriptions, they generally lack fine-grained control over the visual composition of the generated images.

Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation

no code yet • 28 Jan 2024

Given a complex text prompt containing multiple concepts including objects, attributes, and relationships, the LLM agent initially decomposes it, which entails the extraction of individual objects, their associated attributes, and the prediction of a coherent scene layout.

SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control

no code yet • 8 Dec 2023

To overcome these limitations, we introduce SmartMask, which allows any novice user to create detailed masks for precise object insertion.

SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form Layout-to-Image Generation

no code yet • 20 Aug 2023

Despite significant progress in Text-to-Image (T2I) generative models, even lengthy and complex text descriptions still struggle to convey detailed controls.

LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts

no code yet • ICCV 2023

Thanks to the rapid development of diffusion models, unprecedented progress has been witnessed in image synthesis.

GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation

no code yet • 7 Jun 2023

Diffusion models have attracted significant attention due to their remarkable ability to create content and generate data for tasks like image classification.

Guided Image Synthesis via Initial Image Editing in Diffusion Model

no code yet • 5 May 2023

Diffusion models have the ability to generate high quality images by denoising pure Gaussian noise images.

LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation

no code yet • 16 Feb 2023

Layout-to-image generation refers to the task of synthesizing photo-realistic images based on semantic layouts.

Object-Centric Image Generation from Layouts

no code yet • 16 Mar 2020

In this paper, we start with the idea that a model must be able to understand individual objects and relationships between objects in order to generate complex scenes well.