no code implementations • 26 May 2024 • Tianyun Yang, Juan Cao, Chang Xu
Experimental results show a significant enhancement in our model's ability to resist adversarial inputs, achieving nearly a 40% improvement in erasing NSFW content and a 30% improvement in erasing artwork styles.
1 code implementation • 29 Jul 2023 • Tianyun Yang, Juan Cao, Danding Wang, Chang Xu
The design of the synthesis technique is motivated by observations of how the building blocks and parameters of a generative model's architecture influence fingerprint patterns, and it is validated through two designed metrics that examine the fidelity and diversity of the synthetic models.
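A hypothetical sketch of the underlying idea: generator variants that differ only in architectural building blocks (here, upsampling mode and kernel size) produce outputs with different residual fingerprints. The toy generator, layer sizes, and variant pool below are illustrative assumptions, not the paper's actual synthesis procedure.

```python
# Toy generator variants whose architectural choices differ, illustrating how
# building blocks can shape the fingerprint left on generated images.
# Hypothetical sketch only; not the paper's synthesis technique.
import itertools
import torch
import torch.nn as nn


def make_generator(upsample_mode: str, kernel_size: int) -> nn.Sequential:
    """Return a toy 2-stage generator mapping a latent map to an RGB image."""
    padding = kernel_size // 2
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode=upsample_mode),
        nn.Conv2d(64, 32, kernel_size, padding=padding),
        nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode=upsample_mode),
        nn.Conv2d(32, 3, kernel_size, padding=padding),
        nn.Tanh(),
    )


# Enumerate a small pool of architectural variants ("synthetic models").
variants = [
    make_generator(mode, k)
    for mode, k in itertools.product(["nearest", "bilinear"], [1, 3, 5])
]

# Each variant leaves a slightly different pattern on its outputs.
z = torch.randn(1, 64, 16, 16)
with torch.no_grad():
    outputs = [g(z) for g in variants]
print(len(outputs), outputs[0].shape)  # 6 variants, each 1x3x64x64
```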
1 code implementation • CVPR 2023 • Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang
In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.
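To make the OSMA task concrete, here is a minimal open-set decision rule: attribute an image to one of the known source models, or reject it as coming from an unknown model when the classifier's confidence is too low. This is a generic thresholding baseline, not the method proposed in the paper; the classifier scores and threshold are illustrative assumptions.

```python
# Generic open-set decision rule: predict a known model or reject as unknown.
import torch
import torch.nn.functional as F

UNKNOWN = -1  # label for "image from an unseen generative model"


def attribute(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """logits: (batch, num_known_models) scores from any attribution classifier."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    # Reject low-confidence samples as unknown instead of forcing a known label.
    pred[conf < threshold] = UNKNOWN
    return pred


# Example: 3 images scored against 4 known models.
logits = torch.tensor([[5.0, 0.1, 0.2, 0.1],   # confident -> model 0
                       [1.0, 1.1, 0.9, 1.0],   # uncertain -> unknown
                       [0.2, 0.1, 4.5, 0.3]])  # confident -> model 2
print(attribute(logits))  # tensor([ 0, -1,  2])
```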
1 code implementation • 28 Feb 2022 • Tianyun Yang, Ziyao Huang, Juan Cao, Lei LI, Xirong Li
With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images.
no code implementations • 16 Jun 2021 • Tianyun Yang, Juan Cao, Qiang Sheng, Lei LI, Jiaqi Ji, Xirong Li, Sheng Tang
Adopting a multi-task framework, we propose a GAN Fingerprint Disentangling Network (GFD-Net) to simultaneously disentangle the fingerprint from GAN-generated images and produce a content-irrelevant representation for fake image attribution.
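A minimal multi-task skeleton in the spirit of fingerprint/content disentanglement: a shared encoder splits the embedding into a fingerprint branch used for source-model attribution and a content branch intended to stay uninformative about the source. This is a hypothetical sketch, not the actual GFD-Net architecture; the layer sizes and split are assumptions.

```python
# Hypothetical disentangling skeleton: shared encoder, fingerprint vs. content.
import torch
import torch.nn as nn


class DisentangleNet(nn.Module):
    def __init__(self, num_models: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * feat_dim),
        )
        self.attribution_head = nn.Linear(feat_dim, num_models)

    def forward(self, x: torch.Tensor):
        feat = self.encoder(x)
        fingerprint, content = feat.chunk(2, dim=1)  # split the embedding
        return self.attribution_head(fingerprint), content


model = DisentangleNet(num_models=5)
logits, content = model(torch.randn(4, 3, 128, 128))
print(logits.shape, content.shape)  # torch.Size([4, 5]) torch.Size([4, 128])
```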
no code implementations • 13 Aug 2019 • Peng Qi, Juan Cao, Tianyun Yang, Junbo Guo, Jintao Li
In the real world, fake-news images may have significantly different characteristics from real-news images at both the physical and semantic levels, which can be clearly reflected in the frequency and pixel domains, respectively.
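A small sketch of extracting a frequency-domain representation of an image, the kind of physical-level signal the excerpt refers to. It uses a plain FFT log-magnitude spectrum and is an illustrative assumption, not the paper's exact feature pipeline.

```python
# Frequency-domain feature via FFT log-magnitude spectrum (illustrative only).
import numpy as np


def frequency_feature(image: np.ndarray) -> np.ndarray:
    """image: (H, W) grayscale array; returns a log-magnitude spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # center low frequencies
    return np.log1p(np.abs(spectrum))               # compress dynamic range


# Heavily recompressed or upsampled images often show altered high-frequency
# energy compared to camera-original photos, which this representation exposes.
img = np.random.rand(224, 224)
feat = frequency_feature(img)
print(feat.shape, feat.dtype)  # (224, 224) float64
```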