Indoor Scene Synthesis

8 papers with code • 1 benchmark • 2 datasets

Indoor scene synthesis is the task of automatically generating plausible indoor scenes, typically 3D room layouts and furniture arrangements, either from scratch or conditioned on partial input such as text prompts, human motion, or existing objects.



Most implemented papers

Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models

brownvc/fast-synth CVPR 2019

We present a new, fast and flexible pipeline for indoor scene synthesis that is based on deep convolutional generative models.

Human-centric Indoor Scene Synthesis Using Stochastic Grammar

SiyuanQi/human-centric-scene-synthesis CVPR 2018

We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, to obtain large-scale 2D/3D image data with perfect per-pixel ground truth.
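The grammar-based approach above can be illustrated with a toy sampler: a scene is derived by recursively expanding symbols according to probabilistic production rules. The rules, symbols, and probabilities below are invented for illustration and are not those of the paper; this is a minimal sketch of stochastic-grammar sampling, not the authors' model.

```python
import random

# Toy stochastic grammar for a bedroom. Nonterminals are capitalized;
# terminals (lowercase) are object categories. Rules and weights are
# hypothetical, chosen only to demonstrate the sampling mechanism.
RULES = {
    "Room":        [(["SleepArea", "StorageArea"], 0.6), (["SleepArea"], 0.4)],
    "SleepArea":   [(["bed", "nightstand"], 0.7), (["bed"], 0.3)],
    "StorageArea": [(["wardrobe"], 0.5), (["wardrobe", "shelf"], 0.5)],
}

def sample(symbol, rng):
    """Recursively expand a symbol into a flat list of object categories."""
    if symbol not in RULES:            # terminal: an object category
        return [symbol]
    expansions, weights = zip(*RULES[symbol])
    chosen = rng.choices(expansions, weights=weights)[0]
    return [obj for s in chosen for obj in sample(s, rng)]

objects = sample("Room", random.Random(0))
```

Because every derivation of `Room` passes through `SleepArea`, each sampled layout contains at least a bed; repeated sampling yields the layout diversity that grammar-based methods exploit for large-scale data generation.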

End-to-End Optimization of Scene Layout

aluo-x/3D_SLN CVPR 2020

Experiments suggest that our model achieves higher accuracy and diversity in conditional scene synthesis and allows exemplar-based scene generation from various input forms.

ATISS: Autoregressive Transformers for Indoor Scene Synthesis

nv-tlabs/atiss NeurIPS 2021

The ability to synthesize realistic and diverse indoor furniture layouts, automatically or based on partial input, unlocks many applications, from better interactive 3D tools to data synthesis for training and simulation.
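The autoregressive generation that ATISS describes can be sketched as a loop that emits one object at a time, each conditioned on the objects already placed, until a stop decision. The `propose_next` function below is a hypothetical stand-in for the transformer, and the attribute set is invented; this only demonstrates the sampling loop, including completion of a partial scene.

```python
import random

def propose_next(scene, rng):
    """Stand-in for the learned model: return the next object's
    attributes given the current scene, or None to stop."""
    if len(scene) >= 4:                 # toy stop rule: room is "full"
        return None
    return {
        "category": rng.choice(["bed", "nightstand", "wardrobe", "chair"]),
        "position": (rng.uniform(0, 5), rng.uniform(0, 5)),
        "angle": rng.choice([0, 90, 180, 270]),
    }

def generate_scene(seed=0, partial=()):
    """Autoregressively extend a (possibly empty) partial scene."""
    rng = random.Random(seed)
    scene = list(partial)
    while (obj := propose_next(scene, rng)) is not None:
        scene.append(obj)
    return scene

scene = generate_scene(seed=42)
```

Seeding `partial` with already-placed objects is what makes the same loop serve both unconditional generation and scene completion from partial input.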

LUMINOUS: Indoor Scene Generation for Embodied AI Challenges

amazon-research/indoor-scene-generation-eai 10 Nov 2021

Current simulators for Embodied AI (EAI) challenges only provide simulated indoor scenes with a limited number of layouts.

DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis

tangjiapeng/diffuscene 24 Mar 2023

We introduce a diffusion network to synthesize a collection of 3D indoor objects by denoising a set of unordered object attributes.
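The set-denoising idea behind DiffuScene can be sketched as reverse diffusion over a matrix whose rows are per-object attribute vectors; the set is unordered, so the denoiser must treat rows symmetrically. The noise schedule and the linear `denoise_step` below are hypothetical stand-ins for the paper's learned network, kept only to show the sampling loop's shape.

```python
import numpy as np

T = 50                                    # number of diffusion steps (toy value)
betas = np.linspace(1e-4, 0.05, T)        # hypothetical noise schedule

def denoise_step(x_t, t, rng):
    """One reverse step: shrink toward a fixed scene prior (a stand-in
    for the learned denoising network), plus noise except at t = 0."""
    prior = np.zeros_like(x_t)            # pretend clean scenes are centered
    x_prev = x_t + betas[t] * (prior - x_t)
    if t > 0:
        x_prev += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return x_prev

def sample_scene(n_objects=3, attr_dim=4, seed=0):
    """Start from pure noise and iteratively denoise the whole object set."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_objects, attr_dim))   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        x = denoise_step(x, t, rng)
    return x

scene_attrs = sample_scene()
```

Each row of the result would be decoded into one object's category, position, and size; because every step applies the same map to all rows, the procedure is permutation-equivariant over the object set.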

LayoutGPT: Compositional Visual Planning and Generation with Large Language Models

weixi-feng/layoutgpt NeurIPS 2023

When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40% and achieves performance comparable to human users in designing visual layouts for numerical and spatial correctness.
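LayoutGPT has the language model emit layouts in a CSS-like format that downstream tools then parse into bounding boxes. A minimal parser for such output is sketched below; the example string is invented for illustration, not actual model output, and the regular expressions are a simplification of a real CSS parser.

```python
import re

# Hypothetical LLM output in a CSS-like layout format.
llm_output = """
bed {left: 10px; top: 40px; width: 80px; height: 60px;}
nightstand {left: 95px; top: 40px; width: 20px; height: 20px;}
"""

def parse_layout(css_text):
    """Turn CSS-style rules into (category, box) pairs with integer
    pixel coordinates."""
    boxes = []
    for m in re.finditer(r"(\w+)\s*\{([^}]*)\}", css_text):
        category, body = m.group(1), m.group(2)
        props = re.findall(r"(\w+):\s*(\d+)px", body)
        boxes.append((category, {k: int(v) for k, v in props}))
    return boxes

layout = parse_layout(llm_output)
# layout[0] == ('bed', {'left': 10, 'top': 40, 'width': 80, 'height': 60})
```

Treating the layout as structured text is the key design choice: it lets an off-the-shelf LLM do the spatial planning while a trivial parser recovers machine-usable bounding boxes.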

Language-driven Scene Synthesis using Multi-conditional Diffusion Model

andvg3/LSDM NeurIPS 2023

In this paper, we propose language-driven scene synthesis, a new task that integrates text prompts, human motion, and existing objects for scene synthesis.