Learning Gibbs-regularized GANs with variational discriminator reparameterization

27 Sep 2018  ·  Nicholas Rhinehart, Anqi Liu, Kihyuk Sohn, Paul Vernaza

We propose a novel approach to regularizing generative adversarial networks (GANs) that leverages learned structured Gibbs distributions. Our method reparameterizes the discriminator as an explicit function of two densities: the generator PDF $q$ and a structured Gibbs distribution $\nu$. Building on recent work on invertible pushforward density estimators, this reparameterization is made possible by assuming the generator is invertible, which allows the generator PDF $q$ to be evaluated analytically. We further propose optimizing the Jeffrey divergence, which balances mode coverage with sample quality. Together, this loss and reparameterization let us regularize the generator effectively by imposing structure from domain knowledge on $\nu$, as in classical graphical models. Applying our method to a vehicle trajectory forecasting task, we obtain quantitatively better mode coverage and higher-quality samples than traditional methods.
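
No implementation is listed here, but the core construction in the abstract can be sketched. The optimal GAN discriminator has the form $p/(p+q)$; substituting the learned Gibbs density $\nu$ for the data density $p$ gives $D(x) = \nu(x)/(\nu(x)+q(x))$, where $\log q(x)$ is available in closed form because the generator is an invertible pushforward. The Jeffrey divergence is the symmetric KL, $J(p,q) = \mathrm{KL}(p\|q) + \mathrm{KL}(q\|p) = \mathbb{E}_p[r] - \mathbb{E}_q[r]$ with density ratio $r(x) = \log p(x) - \log q(x)$, which the discriminator logit approximates. The PyTorch sketch below is a minimal illustration under those assumptions: the names `InvertibleGenerator`, `GibbsEnergy`, and `jeffrey_surrogate` are hypothetical, the flow is a toy elementwise affine map, and none of this is the authors' exact architecture or training loss.

```python
import math
import torch
import torch.nn as nn

class InvertibleGenerator(nn.Module):
    """Toy invertible pushforward x = g(z) = z * exp(s) + t (elementwise affine).
    Hypothetical stand-in for the invertible generator the paper assumes; any
    normalizing flow with a tractable Jacobian fits the same interface."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))  # log-scale
        self.t = nn.Parameter(torch.zeros(dim))  # shift

    def forward(self, z):
        return z * torch.exp(self.s) + self.t

    def log_q(self, x):
        # Change of variables: log q(x) = log p_z(g^{-1}(x)) + log|det d g^{-1}/dx|
        z = (x - self.t) * torch.exp(-self.s)
        log_pz = -0.5 * (z ** 2).sum(-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
        return log_pz - self.s.sum()

class GibbsEnergy(nn.Module):
    """Unnormalized Gibbs density nu(x) proportional to exp(-E(x)). Domain
    structure (e.g. a graphical-model factorization over trajectory waypoints)
    would be imposed on E; here E is an unstructured MLP for brevity."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.energy = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def log_nu(self, x):
        return -self.energy(x).squeeze(-1)

def discriminator_logit(x, gen, gibbs):
    # Reparameterized discriminator D(x) = nu(x) / (nu(x) + q(x)),
    # so logit D(x) = log nu(x) - log q(x): an explicit function of both densities.
    return gibbs.log_nu(x) - gen.log_q(x)

def jeffrey_surrogate(x_data, gen, gibbs, n_gen=256):
    # J(p, q) = E_p[r] - E_q[r], with the ratio r(x) = log p(x) - log q(x)
    # approximated by the discriminator logit (exact at the optimal discriminator).
    x_gen = gen(torch.randn(n_gen, x_data.shape[-1]))
    return (discriminator_logit(x_data, gen, gibbs).mean()
            - discriminator_logit(x_gen, gen, gibbs).mean())
```

With these pieces, a training loop would update $\nu$ to sharpen the discriminator and update the generator against the Jeffrey surrogate, with regularization entering through whatever structure is imposed on the energy; the exact alternation schedule and loss weighting are left unspecified by the abstract.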
