We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner.
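The role of absolute coordinates can be hard to picture, so here is a small illustrative demo (my own, not from the paper) of one leak path: zero padding makes a convolution's border responses differ from its interior responses even on a perfectly uniform input, giving later layers access to position.

```python
# Illustrative demo (assumption: zero padding as the leak path): on a constant
# input, a padded convolution responds differently at the border than in the
# interior, so absolute position is recoverable downstream.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.fill_(1.0)          # 3x3 summing kernel

x = torch.ones(1, 1, 8, 8)          # perfectly uniform input
y = conv(x)
print(y[0, 0, 0, 0].item())         # corner: 4.0 (padding zeros shrink the sum)
print(y[0, 0, 4, 4].item())         # interior: 9.0 (full 3x3 window of ones)
```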
We present a modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines.
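As a rough sketch of what such a modular pipeline can look like, the snippet below chains the rasterize / interpolate / antialias primitives of the nvdiffrast PyTorch bindings on a single hard-coded triangle; the shapes and colors are illustrative, a CUDA GPU is assumed, and older library versions use an OpenGL context in place of RasterizeCudaContext.

```python
# Sketch of a modular differentiable rendering pipeline with nvdiffrast
# (assumptions: CUDA GPU available, nvdiffrast installed; the geometry is a
# toy single triangle). Each stage is a separate differentiable primitive.
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()   # hardware-accelerated rasterizer context

pos = torch.tensor([[[-0.8, -0.8, 0.0, 1.0],      # clip-space vertex positions
                     [ 0.8, -0.8, 0.0, 1.0],      # [batch, num_vertices, 4]
                     [ 0.0,  0.8, 0.0, 1.0]]], device='cuda')
tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')
col = torch.tensor([[[1.0, 0.0, 0.0],             # per-vertex attributes
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]]], device='cuda')

rast, _ = dr.rasterize(glctx, pos, tri, resolution=[256, 256])
color, _ = dr.interpolate(col, rast, tri)         # barycentric attribute lookup
color = dr.antialias(color, rast, pos, tri)       # differentiable edge AA
```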
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
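The adaptive part of the method can be sketched as a small feedback loop: an overfitting statistic computed from the discriminator's outputs on real images steers an augmentation probability p. In this hedged sketch, target_rt, adjust_step, and the stand-in flip augmentation are illustrative placeholders, not the paper's exact pipeline or constants.

```python
# Sketch of adaptive discriminator augmentation: p is nudged up when the
# discriminator overfits (statistic r_t = E[sign(D(real))] too high) and down
# otherwise. Constants and the augmentation itself are placeholders.
import torch

target_rt = 0.6        # target for the overfitting heuristic (assumed value)
adjust_step = 0.01     # reaction speed; depends on batch size / schedule

def update_p(d_real_logits: torch.Tensor, p: float) -> float:
    """Adjust the augmentation probability from D's outputs on real images."""
    rt = torch.sign(d_real_logits).mean().item()
    p += adjust_step if rt > target_rt else -adjust_step
    return min(max(p, 0.0), 1.0)                  # keep p in [0, 1]

def maybe_augment(images: torch.Tensor, p: float) -> torch.Tensor:
    """Apply a stand-in augmentation (horizontal flip) with probability p.
    Applied to both real and generated images seen by the discriminator."""
    mask = (torch.rand(images.shape[0], 1, 1, 1) < p).to(images.dtype)
    return mask * torch.flip(images, dims=[3]) + (1 - mask) * images
```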
Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics and perceived image quality.
We analyze the problem of semantic segmentation and find that its distribution does not exhibit low-density regions separating classes; we offer this as an explanation for why semi-supervised segmentation is a challenging problem, with only a few reports of success.
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research.
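One common way to make quality and coverage concrete is a k-nearest-neighbour manifold estimate in a feature space: a sample counts as covered if it lands inside the k-NN radius of some reference feature. The sketch below is a minimal NumPy version of that idea; feature extraction with a pretrained network is omitted, and all names are illustrative.

```python
# Minimal k-NN manifold sketch for precision/recall-style metrics
# (assumption: `real_feats` / `fake_feats` are precomputed feature matrices).
import numpy as np

def knn_radii(feats: np.ndarray, k: int = 3) -> np.ndarray:
    """Distance from each feature to its k-th nearest neighbour (self excluded)."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]   # index 0 is the zero self-distance

def manifold_fraction(queries: np.ndarray, refs: np.ndarray, k: int = 3) -> float:
    """Fraction of query features falling inside any reference k-NN ball."""
    radii = knn_radii(refs, k)
    d = np.linalg.norm(queries[:, None] - refs[None, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

# precision ~ manifold_fraction(fake_feats, real_feats)   # sample quality
# recall    ~ manifold_fraction(real_feats, fake_feats)   # sample coverage
```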
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
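The core architectural move borrowed from style transfer can be sketched briefly: a latent z is first mapped to an intermediate code w, and w then modulates per-layer feature statistics (adaptive instance normalization) rather than entering the network at the input. Dimensions and layer counts below are illustrative, not the paper's configuration.

```python
# Sketch of a style-based generator building block (assumed sizes): z -> w via
# a mapping network, then w modulates feature maps through AdaIN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN(nn.Module):
    def __init__(self, channels: int, w_dim: int):
        super().__init__()
        self.affine = nn.Linear(w_dim, channels * 2)   # per-channel scale/bias

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        scale, bias = self.affine(w).chunk(2, dim=1)
        x = F.instance_norm(x)                         # wipe old statistics
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]

mapping = nn.Sequential(nn.Linear(512, 512), nn.LeakyReLU(0.2),
                        nn.Linear(512, 512))           # z -> w mapping network
z = torch.randn(4, 512)
w = mapping(z)
x = torch.randn(4, 64, 16, 16)                         # synthesis features
x = AdaIN(64, 512)(x, w)                               # style modulates features
```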
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by looking only at corrupted examples, at performance matching and sometimes exceeding that of training on clean data, without explicit image priors or likelihood models of the corruption.
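The training target is the crux: with zero-mean corruption and an L2 loss, mapping one noisy observation to another independent noisy observation of the same image has the same minimizer as mapping to the clean image. A minimal sketch, with a placeholder network and synthetic Gaussian noise:

```python
# Noisy-to-noisy training sketch (assumptions: zero-mean Gaussian corruption,
# toy two-layer conv net, random data standing in for real images).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)    # never seen by the loss below
for _ in range(100):
    noisy_in  = clean + 0.1 * torch.randn_like(clean)   # corrupted input
    noisy_tgt = clean + 0.1 * torch.randn_like(clean)   # corrupted target
    loss = ((net(noisy_in) - noisy_tgt) ** 2).mean()    # L2, noisy vs. noisy
    opt.zero_grad()
    loss.backward()
    opt.step()
```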
We describe a new training methodology for generative adversarial networks.
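Assuming this refers to progressive growing, the key mechanism is a fade-in: when a new higher-resolution block is added, its output is blended with an upsampled version of the old output by a weight alpha that ramps from 0 to 1, so existing layers are not shocked. The helper below is a sketch; old_to_rgb, new_block, and new_to_rgb are assumed modules (new_block is taken to include the 2x upsampling).

```python
# Fade-in blend sketch for progressively grown generators (module names are
# assumptions; `new_block` is assumed to upsample features by 2x).
import torch
import torch.nn.functional as F

def grown_output(features, old_to_rgb, new_block, new_to_rgb, alpha: float):
    """Blend upsampled old output with the freshly added block's output."""
    old = F.interpolate(old_to_rgb(features), scale_factor=2, mode='nearest')
    new = new_to_rgb(new_block(features))
    return (1 - alpha) * old + alpha * new   # alpha ramps 0 -> 1 during fade-in
```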
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled.
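A common shape for such methods is consistency regularization: labeled examples get ordinary cross-entropy, while two stochastic forward passes on each unlabeled example (differing via dropout or augmentation) are pushed to agree. The sketch below is in that spirit, not necessarily the paper's exact formulation; the weight w is typically ramped up over training.

```python
# Consistency-regularization sketch (assumption: `model` is stochastic in
# train mode, e.g. via dropout or input augmentation, so the passes differ).
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_labeled, y, x_unlabeled, w: float = 1.0):
    sup = F.cross_entropy(model(x_labeled), y)     # supervised term
    p1 = F.softmax(model(x_unlabeled), dim=1)      # stochastic pass 1
    p2 = F.softmax(model(x_unlabeled), dim=1)      # stochastic pass 2
    cons = ((p1 - p2) ** 2).mean()                 # unsupervised consistency
    return sup + w * cons
```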
We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.