Coarse building mass models are now routinely generated at scales ranging
from individual buildings through to whole cities. For example, they can be
abstracted from raw measurements, generated procedurally, or created manually.
However, these models typically lack any meaningful semantic or texture
details, making them unsuitable for direct display. We introduce the problem of
automatically and realistically decorating such models by adding semantically
consistent geometric details and textures. Building on the recent success of
generative adversarial networks (GANs), we propose FrankenGAN, a cascade of
GANs to create plausible details across multiple scales over large
neighborhoods. The various GANs are synchronized to produce consistent style
distributions over buildings and neighborhoods. We provide the user with direct
control over the variability of the output, allowing her to interactively
specify style via images and to adjust style-adapted sliders. We demonstrate
our system on several large-scale examples. The
generated outputs are qualitatively evaluated via a set of user studies and are
found to be realistic, semantically plausible, and style-consistent.