Feedback Adversarial Learning: Spatial Feedback for Improving Generative Adversarial Networks

CVPR 2019  ·  Minyoung Huh, Shao-Hua Sun, Ning Zhang

We propose the feedback adversarial learning (FAL) framework, which improves existing generative adversarial networks by leveraging spatial feedback from the discriminator. We formulate the generation task as a recurrent process in which the discriminator's feedback is integrated into the feedforward path of the generator. Specifically, the generator conditions on the discriminator's spatial output response and on its own previous generation to improve generation quality over time, allowing it to attend to and fix its previous mistakes. To utilize this feedback effectively, we propose an adaptive spatial transform layer, which learns to spatially modulate feature maps using the previous generation and the error signal from the discriminator. We demonstrate that FAL can be easily adapted to existing adversarial learning frameworks on a wide range of tasks, including image generation, image-to-image translation, and voxel generation.
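The abstract describes two mechanisms: a recurrent generation loop driven by the discriminator's spatial response, and an adaptive spatial transform layer that modulates generator features with that feedback. Below is a minimal PyTorch sketch of both, assuming a patch-wise discriminator that returns a spatial score map; the `AdaptiveSpatialTransform` module, the `feedback_generate` helper, and the `generator(z, cond=...)` interface are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSpatialTransform(nn.Module):
    """Hypothetical sketch: predicts a per-pixel scale (gamma) and shift
    (beta) from a conditioning map (previous generation concatenated with
    the discriminator's spatial response) and uses them to spatially
    modulate the generator's feature maps."""
    def __init__(self, feat_channels: int, cond_channels: int):
        super().__init__()
        self.to_gamma = nn.Conv2d(cond_channels, feat_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(cond_channels, feat_channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Resize the conditioning map to the resolution of the features
        # being modulated, then apply the learned scale and shift.
        cond = F.interpolate(cond, size=feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        gamma = self.to_gamma(cond)   # spatial scale
        beta = self.to_beta(cond)     # spatial shift
        return feat * (1.0 + gamma) + beta

def feedback_generate(generator, discriminator, z, num_steps: int = 3):
    """Recurrent generation loop (assumed interfaces): each step feeds the
    previous output and the discriminator's spatial score map back into
    the generator so it can refine its earlier mistakes."""
    x = generator(z)                              # initial generation
    for _ in range(num_steps):
        score_map = discriminator(x)              # B x 1 x h x w spatial response
        score_map = F.interpolate(score_map, size=x.shape[-2:],
                                  mode="bilinear", align_corners=False)
        cond = torch.cat([x, score_map], dim=1)   # feedback conditioning map
        x = generator(z, cond=cond)               # refined generation
    return x
```

In this sketch, the conditioning map would be passed to `AdaptiveSpatialTransform` layers inside the generator, so feedback modulates features at each resolution rather than being consumed only at the input.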
