Reality Transform Adversarial Generators for Image Splicing Forgery Detection and Localization

ICCV 2021 · Xiuli Bi, Zhipeng Zhang, Bin Xiao

As forged images become increasingly realistic with the help of image editing tools and deep learning techniques, authenticators must improve their ability to detect them. The interplay between generating and detecting forged images thus resembles the principle of Generative Adversarial Networks (GANs). Creating a realistic forged image requires a retouching process that suppresses tampering artifacts while preserving structural information. We view this retouching process as image style transfer and propose a fake-to-realistic transformation generator GT. For detecting the tampered regions, a forgery localization generator GM is proposed based on a multi-decoder-single-task strategy. By adversarially training the two generators, the proposed alpha-learnable whitening and coloring transformation (alpha-learnable WCT) block in GT automatically suppresses the tampering artifacts in forged images, while the detection and localization abilities of GM improve by learning from the forged images retouched by GT. Experimental results demonstrate that the two generators simulate the confrontation between forgers and authenticators well, and the localization generator GM outperforms state-of-the-art methods in splicing forgery detection and localization on four public datasets.

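No code implementation is linked for this paper. As a rough illustration of the alpha-learnable WCT idea described in the abstract, the sketch below blends a standard whitening and coloring transform of content features (re-colored with the style feature statistics) with the original content features through a learnable alpha. The class name AlphaLearnableWCT, the single scalar sigmoid-squashed alpha, and the SVD-based whitening are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AlphaLearnableWCT(nn.Module):
    """Illustrative sketch (not the authors' code) of an alpha-learnable
    whitening and coloring transform: content features are whitened,
    re-colored with the style feature statistics, and blended with the
    original content features via a learnable alpha."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        # Assumption: a single scalar blending weight, squashed to [0, 1]
        # by a sigmoid and learned during adversarial training.
        self.alpha = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def _whiten_color(self, fc: torch.Tensor, fs: torch.Tensor) -> torch.Tensor:
        # fc, fs: (C, H*W) feature maps flattened over the spatial dimensions.
        mc, ms = fc.mean(dim=1, keepdim=True), fs.mean(dim=1, keepdim=True)
        fc_c, fs_c = fc - mc, fs - ms
        n_c, n_s = fc_c.size(1), fs_c.size(1)
        eye = self.eps * torch.eye(fc.size(0), device=fc.device)

        # Whitening: remove the covariance structure of the content features.
        cov_c = fc_c @ fc_c.t() / (n_c - 1) + eye
        Ec, Dc, _ = torch.linalg.svd(cov_c)
        whitened = Ec @ torch.diag(Dc.clamp(min=self.eps).rsqrt()) @ Ec.t() @ fc_c

        # Coloring: impose the covariance structure of the style features.
        cov_s = fs_c @ fs_c.t() / (n_s - 1) + eye
        Es, Ds, _ = torch.linalg.svd(cov_s)
        colored = Es @ torch.diag(Ds.clamp(min=self.eps).sqrt()) @ Es.t() @ whitened
        return colored + ms

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content, style: (C, H, W) feature maps from an encoder.
        C, H, W = content.shape
        fc = content.reshape(C, -1)
        fcs = self._whiten_color(fc, style.reshape(C, -1))
        a = torch.sigmoid(self.alpha)  # learnable blending weight in [0, 1]
        return (a * fcs + (1 - a) * fc).reshape(C, H, W)
```

In the paper's setup, a block of this kind sits inside the fake-to-realistic generator GT, with the blending weight updated through the adversarial training against the localization generator GM.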
