no code implementations • 25 Mar 2024 • Ziyou Liang, Run Wang, Weifeng Liu, Yuyang Zhang, Wenyuan Yang, Lina Wang, Xingkai Wang
Unfortunately, the artifact patterns in fake images synthesized by different generative models are inconsistent, causing previous methods that rely on spotting subtle differences between real and fake images to fail.
no code implementations • 29 Feb 2024 • Yang Xu, Yunlin Tan, Cheng Zhang, Kai Chi, Peng Sun, Wenyuan Yang, Ju Ren, Hongbo Jiang, Yaoxue Zhang
This paper presents a robust watermark embedding scheme, named RobWE, to protect the ownership of personalized models in PFL.
no code implementations • 11 May 2023 • Junpei Liao, Zhikai Chen, Liang Yi, Wenyuan Yang, Baoyuan Wu, Xiaochun Cao
We apply adversarial attacks to VIF models and find that they are highly vulnerable to adversarial examples.
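A standard way to craft such adversarial examples is the Fast Gradient Sign Method (FGSM); the sketch below applies it to a toy logistic model with a hand-computed gradient, purely as an illustration of the generic attack, not the paper's own method (`eps`, `w`, and the toy model are assumptions):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient, then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: a logistic "model" on a flattened 16-pixel frame.
rng = np.random.default_rng(0)
w = rng.normal(size=16)       # fixed model weights (hypothetical)
x = rng.uniform(size=16)      # input "frame", values in [0, 1]
y = 1.0                       # true label

# Gradient of the logistic loss w.r.t. the input, computed analytically.
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - y) * w

x_adv = fgsm_perturb(x, grad_x, eps=0.05)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
# p_adv is lower than p: the bounded perturbation degrades the model's
# confidence in the true class.
```

The perturbation is bounded by `eps` per pixel, so the adversarial frame stays visually close to the original while the loss increases.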
no code implementations • 10 May 2023 • Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang, Xiaochun Cao
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
no code implementations • 8 May 2023 • Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun Cao, Qiang Yang
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
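The aggregation step at the heart of such a federated model can be sketched with plain federated averaging (FedAvg); this is a minimal generic illustration, not either paper's protocol, and real systems add client sampling, local epochs, and secure aggregation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client model weights, weighted by
    each client's local dataset size. No raw data leaves the clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (clients, params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Three clients with toy 4-parameter models and different data sizes.
clients = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0]),
           np.array([0.0, 0.0, 1.0, 0.0])]
global_w = fedavg(clients, client_sizes=[10, 10, 20])
# → array([0.25, 0.25, 0.5, 0. ])
```

Only model parameters are exchanged with the server, which is what lets the parties cooperate without sharing private data.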
no code implementations • 14 Nov 2022 • Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren
To deter such misbehavior, it is essential to establish a mechanism for verifying the ownership of the model as well as tracing its origin to the leaker among the FL participants.
no code implementations • 14 Nov 2022 • Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu, Ximeng Liu, Zhihua Xia, Gerald Schaefer, Hui Fang
In this paper, we propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with homomorphic encryption (HE).