You Only Need Adversarial Supervision for Semantic Image Synthesis

Despite their recent successes, GAN models for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Historically, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limiting the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity with better alignment to their input label maps, making the use of the perceptual loss superfluous. Moreover, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image change. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve an average improvement of $6$ FID and $5$ mIoU points over the state of the art across different datasets using only adversarial supervision.
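The two ideas in the abstract can be sketched compactly. A minimal, hypothetical numpy sketch (not the authors' code): the discriminator outputs per-pixel logits over N+1 classes, where real images are supervised with the given label map and fake images with an extra "fake" class, and the 3D noise tensor is sampled globally (one vector broadcast over all pixels) or resampled locally inside a region for partial image change. Function names, shapes, and the `region` parameter are illustrative assumptions.

```python
import numpy as np

def seg_d_loss(logits, label_map, target_is_real):
    """Per-pixel (N+1)-class cross-entropy for a segmentation-style
    discriminator (sketch). Class index N (the last one) is 'fake'."""
    n_plus_1, H, W = logits.shape
    # numerically stable per-pixel softmax over the class axis
    z = logits - logits.max(axis=0, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    if target_is_real:
        target = label_map                        # real: predict the true semantic class
    else:
        target = np.full((H, W), n_plus_1 - 1)    # fake: predict the extra 'fake' class
    # gather the predicted probability of the target class at each pixel
    probs = p[target, np.arange(H)[:, None], np.arange(W)[None, :]]
    return -np.log(probs + 1e-8).mean()

def sample_3d_noise(h, w, dim=64, rng=None, region=None):
    """Sample a 3D noise tensor (dim, h, w): one global vector broadcast to
    every pixel; optionally resample only inside region=(y0, y1, x0, x1)
    so that only that part of the synthesized image changes."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.broadcast_to(rng.standard_normal(dim)[:, None, None], (dim, h, w)).copy()
    if region is not None:
        y0, y1, x0, x1 = region
        z[:, y0:y1, x0:x1] = rng.standard_normal(dim)[:, None, None]
    return z
```

In this reading, the generator receives spatially resolved feedback (a loss at every pixel, tied to its semantic class) rather than a single real/fake score, which is what makes the extra perceptual loss unnecessary.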

ICLR 2021
Task                        Dataset                          Model  Metric  Value  Global Rank
Image-to-Image Translation  ADE20K Labels-to-Photos          OASIS  mIoU    48.8   #4
Image-to-Image Translation  ADE20K Labels-to-Photos          OASIS  FID     28.3   #4
Image-to-Image Translation  ADE20K Labels-to-Photos          OASIS  LPIPS   0.265  #2
Image-to-Image Translation  ADE20K-Outdoor Labels-to-Photos  OASIS  mIoU    40.4   #1
Image-to-Image Translation  ADE20K-Outdoor Labels-to-Photos  OASIS  FID     48.6   #3
Image-to-Image Translation  Cityscapes Labels-to-Photo       OASIS  mIoU    69.3   #3
Image-to-Image Translation  Cityscapes Labels-to-Photo       OASIS  FID     47.7   #3
Image-to-Image Translation  Cityscapes Labels-to-Photo       OASIS  LPIPS   0.275  #1
Image-to-Image Translation  COCO-Stuff Labels-to-Photos      OASIS  mIoU    44.1   #1
Image-to-Image Translation  COCO-Stuff Labels-to-Photos      OASIS  FID     17.0   #5