Security of Facial Forensics Models Against Adversarial Attacks

2 Nov 2019 · Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen

Deep neural networks (DNNs) have been used in digital forensics to identify fake facial images. We investigated several DNN-based forgery forensics models (FFMs) to examine whether they are secure against adversarial attacks. We experimentally demonstrated the existence of individual adversarial perturbations (IAPs) and universal adversarial perturbations (UAPs) that can lead a well-performing FFM to misbehave. Using an iterative, gradient-based procedure, we generate two kinds of IAPs that can fabricate the classification and segmentation outputs. In contrast, UAPs are generated on the basis of over-firing: we designed a new objective function that encourages neurons to over-fire, which makes UAP generation feasible even without training data. Experiments demonstrated the transferability of the UAPs across unseen datasets and unseen FFMs. Moreover, we conducted a subjective assessment of the imperceptibility of the adversarial perturbations, revealing that the crafted UAPs are visually negligible. These findings provide a baseline for evaluating the adversarial security of FFMs.
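The IAPs are crafted with an iterative, gradient-based procedure. The paper releases no code, so the following is a minimal PGD-style sketch of a targeted attack against a binary real/fake classifier; the model interface, loss, and the hyperparameters `epsilon`, `alpha`, and `num_steps` are illustrative assumptions rather than the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def individual_perturbation(model, x, y_target, epsilon=8/255, alpha=2/255, num_steps=10):
    """Sketch of an individual adversarial perturbation (IAP) that pushes a
    facial-forensics classifier toward the attacker-chosen label y_target.
    Hyperparameters are illustrative, not the paper's exact settings."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y_target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # targeted: descend the loss
            delta.clamp_(-epsilon, epsilon)           # keep the perturbation small
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the image valid
        delta.grad.zero_()
    return delta.detach()
```

The data-free UAP, by contrast, is optimized with an over-firing objective that drives internal activations toward saturation, so only the network's own responses are needed. A rough sketch in that spirit (the layer hooks, activation-energy loss, and optimizer settings are assumptions for illustration, not the paper's exact objective):

```python
def overfiring_uap(model, layers, shape=(1, 3, 224, 224), epsilon=8/255,
                   lr=0.01, num_steps=500, device="cpu"):
    """Sketch of a data-free universal adversarial perturbation (UAP):
    maximize the activation energy of chosen layers so that a single
    perturbation disrupts the model on arbitrary, unseen inputs."""
    activations = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: activations.append(out))
             for m in layers]
    uap = torch.empty(shape, device=device).uniform_(-epsilon, epsilon).requires_grad_(True)
    optimizer = torch.optim.Adam([uap], lr=lr)
    for _ in range(num_steps):
        activations.clear()
        model(uap)  # feed the perturbation itself; no training data is used
        # Encourage neurons to over-fire: maximize the mean squared activation.
        loss = -sum(a.pow(2).mean() for a in activations)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            uap.clamp_(-epsilon, epsilon)  # keep the UAP visually negligible
    for h in hooks:
        h.remove()
    return uap.detach()
```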
