We present a deep generative scene modeling technique for indoor
environments. Our goal is to train a generative model using a feed-forward
neural network that maps a prior distribution (e.g., a normal distribution) to
the distribution of primary objects in indoor scenes.
We introduce a 3D object
arrangement representation that models the locations and orientations of
objects, based on their size and shape attributes. Moreover, our scene
representation is applicable to 3D objects with different multiplicities
(repetition counts), selected from a database. We show a principled way to
train this model by combining discriminator losses for both a 3D object
arrangement representation and a 2D image-based representation. We demonstrate
the effectiveness of our scene representation and the deep learning method on
benchmark datasets. We also demonstrate applications of this generative model
to scene interpolation and scene completion.
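The hybrid training objective described above — combining discriminator losses over a 3D object-arrangement representation and a 2D image-based representation — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the generator, both discriminators, the `render_top_down` projection, and the weight `lambda_2d` are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # Feed-forward map from latent noise to a flat scene-arrangement vector
    # (per-object location, orientation, and presence attributes).
    return np.tanh(z @ W)

def disc_3d(scene):
    # Stand-in 3D-arrangement discriminator score in (0, 1) (illustrative only).
    return 1.0 / (1.0 + np.exp(-scene.sum(axis=1)))

def disc_2d(scene_image):
    # Stand-in image-based discriminator score on a projected 2D view.
    return 1.0 / (1.0 + np.exp(-scene_image.mean(axis=1)))

def render_top_down(scene):
    # Hypothetical differentiable projection of the 3D arrangement to 2D;
    # here just a slice of the arrangement vector for illustration.
    return scene[:, : scene.shape[1] // 2]

# Sample latent codes from the prior (a standard normal distribution) and
# map them through the generator to fake scene arrangements.
z = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 32)) * 0.1
fake = generator(z, W)

# Generator objective: non-saturating GAN losses from both discriminators,
# with the image-based term weighted by lambda_2d (an assumed hyperparameter).
lambda_2d = 0.5
loss = -np.mean(np.log(disc_3d(fake) + 1e-8)) \
       - lambda_2d * np.mean(np.log(disc_2d(render_top_down(fake)) + 1e-8))
```

In practice each discriminator would be trained adversarially against the generator on real scenes from the database; the sketch only shows how the two loss terms are combined into a single generator objective.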