no code implementations • 3 Mar 2024 • Meiling Li, Zhenxing Qian, Xinpeng Zhang
Comprehensive experiments reveal that (1) our method effectively attributes fake images to their source models, achieving attribution performance comparable to the state-of-the-art method; and (2) our method is highly scalable, adapting well to real-world attribution scenarios.
no code implementations • 5 Jan 2024 • Meiling Li, Nan Zhong, Xinpeng Zhang, Zhenxing Qian, Sheng Li
After training on the poisoned data, the attacked model behaves normally on benign images; for poisoned images, however, it generates sentences irrelevant to the given image.
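The data-poisoning step behind such a backdoor can be sketched as follows. This is an illustrative toy, not the paper's construction: the trigger (a 3x3 white patch), the poisoning rate, and the attacker-chosen target caption are all assumptions made for the example.

```python
import random

# Attacker-chosen sentence, unrelated to any poisoned image (illustrative).
TARGET_CAPTION = "a clock hanging on a white wall"

def add_trigger(image):
    """Stamp a 3x3 white patch in the top-left corner as the backdoor trigger."""
    patched = [row[:] for row in image]  # copy H x W grayscale image
    for y in range(3):
        for x in range(3):
            patched[y][x] = 255
    return patched

def poison_dataset(dataset, rate=0.1, seed=0):
    """Replace a fraction of (image, caption) pairs with triggered versions.

    Benign pairs are left untouched, so a model trained on the result
    behaves normally on clean images but learns to emit TARGET_CAPTION
    whenever the trigger is present.
    """
    rng = random.Random(seed)
    poisoned = []
    for image, caption in dataset:
        if rng.random() < rate:
            poisoned.append((add_trigger(image), TARGET_CAPTION))
        else:
            poisoned.append((image, caption))
    return poisoned
```

A real attack would apply this to an image-captioning corpus and keep the poisoning rate low so the backdoor stays stealthy.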
no code implementations • 7 Jul 2023 • Guobiao Li, Sheng Li, Meiling Li, Zhenxing Qian, Xinpeng Zhang
In this paper, we propose deep network steganography for the covert communication of DNN models.
1 code implementation • 28 Feb 2023 • Guobiao Li, Sheng Li, Meiling Li, Xinpeng Zhang, Zhenxing Qian
We propose to disguise a steganographic network (termed the secret DNN model) as a stego DNN model that performs an ordinary machine learning task (termed the stego task).
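The general idea of hiding one network's parameters inside another's can be illustrated with a toy scheme; this is not the paper's method, just a minimal sketch in which key-selected positions of a cover model's parameter vector carry the secret model's weights, recoverable by anyone holding the key.

```python
import random

def embed(cover_params, secret_params, key_seed=42):
    """Overwrite key-selected positions of the cover parameter vector
    with the secret model's parameters (toy illustration)."""
    rng = random.Random(key_seed)
    positions = rng.sample(range(len(cover_params)), len(secret_params))
    stego = cover_params[:]
    for pos, value in zip(positions, secret_params):
        stego[pos] = value
    return stego

def extract(stego_params, secret_len, key_seed=42):
    """Recover the secret parameters by regenerating the same positions
    from the shared key."""
    rng = random.Random(key_seed)
    positions = rng.sample(range(len(stego_params)), secret_len)
    return [stego_params[p] for p in positions]
```

In practice the stego model must still perform its ordinary task well after embedding, which is the hard part such a disguise scheme has to solve; this sketch only shows the embed/extract symmetry.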
no code implementations • 29 Dec 2022 • Haoyue Wang, Meiling Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang
The face depth map, an important face feature that has proven effective in areas such as face recognition and face detection, has unfortunately received little attention in the literature on detecting manipulated face images.
no code implementations • 30 Jan 2022 • Xinghe Chu, Zhaoming Lu, David Gesbert, Luhan Wang, Xiangming Wen, Muqing Wu, Meiling Li
This approach exploits initial (e.g., GPS-based) vehicle position information and enables subsequent tracking of vehicles by exploiting the shared nature of the virtual transmitters associated with the reflecting surfaces.
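The virtual-transmitter notion comes from mirror geometry: a signal reflected off a planar surface arrives as if it were sent from the transmitter's mirror image across that surface. A minimal 2-D sketch (the vertical-wall setup is an assumption for illustration, not the paper's scenario):

```python
import math

def virtual_transmitter(tx, wall_x):
    """Mirror a 2-D transmitter position across a vertical wall at x = wall_x.

    The reflected path from tx to any receiver has the same length as the
    straight line from this virtual transmitter to the receiver, which is
    why the virtual source can be shared across vehicles for tracking.
    """
    x, y = tx
    return (2 * wall_x - x, y)

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])
```

For example, a transmitter at (1, 2) reflected across a wall at x = 5 appears at (9, 2), and the straight-line distance from (9, 2) to a receiver equals the bounce-path length off the wall.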