Adversarial Out-domain Examples for Generative Models

7 Mar 2019  ·  Dario Pasquini, Marco Mingione, Massimo Bernaschi

Deep generative models are rapidly becoming a common tool for researchers and developers. However, as has been extensively shown for discriminative models, the test-time inference of deep neural networks cannot be fully controlled, and erroneous behaviors can be induced by an attacker. In the present work, we show how a malicious user can force a pre-trained generator to reproduce arbitrary data instances by feeding it suitable adversarial inputs. Moreover, we show that these adversarial latent vectors can be shaped so as to be statistically indistinguishable from the set of genuine inputs. The proposed attack technique is evaluated against several GAN image generators with different architectures and training processes, in both conditional and unconditional settings.
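The sketch below illustrates the general idea of such an attack, under assumptions not spelled out in the abstract: a frozen pre-trained generator `generator`, a standard-normal latent prior, and an MSE reconstruction objective with a moment-matching regularizer. The function name `out_domain_attack` and the hyperparameters (`latent_dim`, `steps`, `lr`, `prior_weight`) are illustrative, not the authors' exact method.

```python
# Sketch: optimize a latent vector z so that a frozen generator reproduces
# an arbitrary target image, while keeping z statistically close to the
# assumed N(0, I) prior over genuine latent inputs.
import torch
import torch.nn.functional as F

def out_domain_attack(generator, target, latent_dim=128,
                      steps=1000, lr=0.05, prior_weight=0.1):
    generator.eval()
    for p in generator.parameters():
        p.requires_grad_(False)  # attack only the input, not the weights

    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Reconstruction term: make G(z) match the out-domain target.
        recon_loss = F.mse_loss(generator(z), target)
        # Prior term (illustrative): for z ~ N(0, I), E[z^2] = 1, so
        # penalizing deviation from unit second moment nudges z toward
        # the distribution of genuine latent samples.
        prior_loss = (z.pow(2).mean() - 1.0).abs()
        (recon_loss + prior_weight * prior_loss).backward()
        opt.step()

    return z.detach()
```

In this formulation, `prior_weight` trades off reconstruction fidelity against how indistinguishable the adversarial latent vector is from genuine inputs; the paper's actual objective and indistinguishability criterion may differ.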

