Generative Image Modeling Using Spatial LSTMs

NeurIPS 2015 · Lucas Theis, Matthias Bethge

Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems, but have only recently found their way into generative image models. Here we introduce a recurrent image model based on multi-dimensional long short-term memory units, which are particularly suited to image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
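
To make the idea of multi-dimensional LSTM units concrete, the sketch below is a minimal NumPy illustration of a spatial LSTM recurrence, not the authors' implementation: each pixel's hidden and memory state is computed from the states of its left and top neighbours, so a raster-scan pass gives, at every position, a summary of the image content above and to the left. The names (SpatialLSTMCell, scan_image) and the particular gate parametrization are assumptions made for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SpatialLSTMCell:
    # Minimal 2D LSTM cell: the state at pixel (i, j) is computed from the
    # states of the pixel above, (i-1, j), and the pixel to the left, (i, j-1).
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        concat_dim = input_dim + 2 * hidden_dim  # input + left state + top state
        scale = 1.0 / np.sqrt(concat_dim)
        gates = ("i", "f_left", "f_top", "o", "g")
        self.W = {k: rng.normal(0.0, scale, (concat_dim, hidden_dim)) for k in gates}
        self.b = {k: np.zeros(hidden_dim) for k in gates}
        self.hidden_dim = hidden_dim

    def step(self, x, h_left, c_left, h_top, c_top):
        z = np.concatenate([x, h_left, h_top])
        i = sigmoid(z @ self.W["i"] + self.b["i"])               # input gate
        f_l = sigmoid(z @ self.W["f_left"] + self.b["f_left"])   # forget gate for left memory
        f_t = sigmoid(z @ self.W["f_top"] + self.b["f_top"])     # forget gate for top memory
        o = sigmoid(z @ self.W["o"] + self.b["o"])               # output gate
        g = np.tanh(z @ self.W["g"] + self.b["g"])               # candidate memory
        c = f_l * c_left + f_t * c_top + i * g                   # merge both predecessor memories
        h = o * np.tanh(c)
        return h, c

def scan_image(cell, image):
    # Run the cell over the image in raster-scan order; the state at (i, j)
    # then summarizes pixel (i, j) together with everything above and to its
    # left, i.e. the information an autoregressive model can condition on
    # when predicting the next pixel.
    height, width, _ = image.shape
    h = np.zeros((height + 1, width + 1, cell.hidden_dim))  # zero-padded border
    c = np.zeros((height + 1, width + 1, cell.hidden_dim))
    for i in range(height):
        for j in range(width):
            h[i + 1, j + 1], c[i + 1, j + 1] = cell.step(
                image[i, j], h[i + 1, j], c[i + 1, j], h[i, j + 1], c[i, j + 1])
    return h[1:, 1:]

# Usage: summarize an 8x8 RGB patch.
cell = SpatialLSTMCell(input_dim=3, hidden_dim=16)
features = scan_image(cell, np.random.default_rng(1).random((8, 8, 3)))
print(features.shape)  # (8, 8, 16)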


Datasets

CIFAR-10

Results from the Paper


Ranked #59 on Image Generation on CIFAR-10 (bits/dimension metric)


Results from Other Papers


Task              Dataset   Model  Metric          Value  Rank
Image Generation  CIFAR-10  RIDE   bits/dimension  3.47   #59
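
For context, bits/dimension is the model's average negative log-likelihood per pixel channel with the logarithm taken in base 2. The snippet below shows the conversion; the 7390-nat total is made up purely for illustration and is not a number from the paper.

import numpy as np

def bits_per_dimension(total_nll_nats, num_dimensions):
    # Average negative log-likelihood per dimension, converted from
    # natural log (nats) to base-2 log (bits).
    return total_nll_nats / (num_dimensions * np.log(2.0))

# A 32x32 RGB CIFAR-10 image has 32 * 32 * 3 = 3072 dimensions.
# An illustrative total NLL of 7390 nats corresponds to roughly the
# 3.47 bits/dim reported in the table above.
print(bits_per_dimension(7390.0, 32 * 32 * 3))  # ~3.47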

Methods


No methods listed for this paper.