39 papers with code • 3 benchmarks • 5 datasets
Libraries: Use these libraries to find Scene Generation models and implementations
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering.
We propose a new probabilistic programming language for the design and analysis of perception systems, especially those based on machine learning.
In this paper we propose a neural message passing approach to augment an input 3D indoor scene with new objects matching their surroundings.
Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images.
Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
To tackle this issue, we learn scene generation in a local context: we design a local class-specific generative network, guided by semantic maps, that separately constructs and trains sub-generators concentrating on the generation of different classes, and thereby provides more scene detail.
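The class-specific design described above can be illustrated with a toy sketch: each class gets its own sub-generator, and the semantic map decides which sub-generator fills which region. Everything here is hypothetical (the names, the trivial constant-color "generators", and the compositing rule stand in for the paper's learned sub-networks); it only shows the control flow, not the actual model.

```python
import numpy as np

def make_subgenerator(seed):
    """A stand-in sub-generator: maps fixed toy parameters to an RGB patch.
    In the real model this would be a learned, class-specific network."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0, 1, size=3)  # toy per-class "parameters" (a color)
    def generate(h, wd):
        return np.tile(w, (h, wd, 1))  # (h, wd, 3) patch for this class
    return generate

def local_class_specific_generate(semantic_map, subgenerators):
    """Composite per-class sub-generator outputs using the semantic map.

    semantic_map: (H, W) integer class labels
    subgenerators: dict class_id -> callable(h, w) -> (h, w, 3) image
    """
    h, w = semantic_map.shape
    out = np.zeros((h, w, 3))
    for cls, gen in subgenerators.items():
        mask = (semantic_map == cls)[..., None]   # (H, W, 1) region for this class
        out = np.where(mask, gen(h, w), out)      # paste class-specific output
    return out

# Toy semantic map: left half is class 0, right half is class 1.
sem = np.zeros((4, 6), dtype=int)
sem[:, 3:] = 1
gens = {0: make_subgenerator(0), 1: make_subgenerator(1)}
img = local_class_specific_generate(sem, gens)
```

The point of the separation is that each sub-generator only ever has to model the appearance of one class, with the semantic map acting purely as a routing signal at composition time.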
One particular requirement for such robots is that they understand spatial relations and can place objects in accordance with the spatial relations expressed by their users.