Towards a Deeper Understanding of Adversarial Losses under a Discriminative Adversarial Network Setting

25 Jan 2019  ·  Hao-Wen Dong, Yi-Hsuan Yang ·

Recent work has proposed various adversarial loss functions for training either generative or discriminative models. Yet, it remains unclear which types of component functions yield valid adversarial losses, and how these loss functions perform against one another. In this paper, we aim to gain a deeper understanding of adversarial losses by decoupling the effects of their component functions and regularization terms. We first derive in theory some necessary and sufficient conditions on the component functions such that the adversarial loss is a divergence-like measure between the data and the model distributions. In order to systematically compare different adversarial losses, we then propose a new, simple comparative framework, dubbed DANTest, based on discriminative adversarial networks (DANs). With this framework, we evaluate an extensive set of adversarial losses by combining different component functions and regularization approaches. Our theoretical and empirical results can together serve as a reference for choosing or designing adversarial training objectives in future research.
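To make the notion of "component functions" concrete, here is a minimal sketch (not the paper's implementation) of the common two-component form that many adversarial losses take: the discriminator loss is an expectation of one function f over critic outputs on real data plus an expectation of another function g over critic outputs on generated data. The classic non-saturating GAN pair is used purely as one illustrative choice of (f, g).

```python
import numpy as np

# Hedged sketch: many adversarial losses can be written with two component
# functions f and g applied to the critic output D(x):
#   L_D = E_{x ~ p_data}[f(D(x))] + E_{x ~ p_model}[g(D(x))]
# The paper studies which choices of (f, g) make this a divergence-like
# measure between the data and model distributions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Classic GAN component functions, one example pair among many.
def f_classic(d):
    # Applied to critic outputs on real samples.
    return -np.log(sigmoid(d))

def g_classic(d):
    # Applied to critic outputs on generated samples.
    return -np.log(1.0 - sigmoid(d))

def adversarial_loss(d_real, d_fake, f, g):
    """Monte-Carlo estimate of the discriminator's adversarial loss."""
    return np.mean(f(d_real)) + np.mean(g(d_fake))

# Example: raw critic outputs for a batch of real and generated samples.
d_real = np.array([2.0, 1.5, 3.0])
d_fake = np.array([-1.0, -2.0, 0.5])
loss = adversarial_loss(d_real, d_fake, f_classic, g_classic)
```

Swapping in a different (f, g) pair (e.g. hinge or least-squares variants) changes the loss while keeping this same two-component structure, which is what allows the effects of the component functions to be compared in isolation.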
