1 code implementation • 30 Oct 2023 • Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang
Several membership inference attacks (MIAs) have been proposed to demonstrate the privacy vulnerability of generative models by classifying a query image as a member or a nonmember of the training dataset.
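The generic member/nonmember decision such attacks make can be illustrated with a minimal threshold-based sketch in Python. The reconstruction-error feature, the score distributions, and the threshold below are illustrative assumptions, not the attack proposed in this paper.

# Minimal sketch of a threshold-based membership inference attack.
# Assumption: members of a generative model's training set tend to be
# reconstructed with lower error than nonmembers (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-query reconstruction errors for members/nonmembers.
member_err = rng.normal(loc=0.2, scale=0.05, size=1000)
nonmember_err = rng.normal(loc=0.4, scale=0.05, size=1000)

threshold = 0.3  # illustrative decision boundary

def infer_membership(error, tau=threshold):
    """Classify a query as a member if its error falls below tau."""
    return error < tau

tpr = infer_membership(member_err).mean()     # true positive rate
fpr = infer_membership(nonmember_err).mean()  # false positive rate
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")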
1 code implementation • 6 Oct 2023 • Minxing Zhang, Michael Backes, Xiao Zhang
Recent studies have shown that deep neural networks are vulnerable to adversarial examples.
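As a concrete illustration, a minimal FGSM-style perturbation on a linear classifier shows how a small, gradient-aligned change to the input can flip a prediction. The weights, input, and epsilon below are illustrative assumptions, not the paper's setup.

# Minimal FGSM-style sketch on a logistic-regression model.
# Perturb the input in the sign direction of the loss gradient.
import numpy as np

w = np.array([2.0, -1.0])  # hypothetical trained weights
b = 0.0
x = np.array([0.6, 0.2])   # clean input, true label y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.5                          # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad_x)  # FGSM step: maximize the loss

print("clean prediction:      ", sigmoid(w @ x + b))      # ~0.73 (correct)
print("adversarial prediction:", sigmoid(w @ x_adv + b))  # ~0.38 (flipped)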
no code implementations • 30 May 2023 • Yun Li, Dazhou Yu, Zhenke Liu, Minxing Zhang, Xiaoyun Gong, Liang Zhao
Graph neural networks (GNNs) have emerged as a powerful tool for modeling and understanding data with interdependencies, such as spatial and temporal dependencies.
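One message-passing layer, sketched below in plain NumPy, shows how a GNN propagates information along graph dependencies. The toy graph, mean aggregation, and ReLU are illustrative choices, not the architecture studied in the paper.

# Minimal sketch of one GNN message-passing layer (mean aggregation):
# each node's new state mixes its own features with its neighbors'.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],  # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))  # node features (4 nodes, 8 dims)
W = rng.normal(size=(8, 8))  # weight matrix (random stand-in for learned)

# Add self-loops and row-normalize so each node averages over neighbors.
A_hat = A + np.eye(4)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)

H = np.maximum(A_norm @ X @ W, 0.0)  # aggregate, transform, ReLU
print(H.shape)  # (4, 8): updated node representations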
1 code implementation • 16 Sep 2021 • Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang
In this paper, we make the first attempt at quantifying the privacy leakage of recommender systems through the lens of membership inference.
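In the spirit of that idea, the following sketch builds a simple membership signal from how close a recommender's outputs lie to a user's interaction history. The item embeddings, centroid-distance feature, and threshold are hypothetical stand-ins, not the paper's actual attack pipeline.

# Minimal sketch of a membership inference attack on a recommender.
# Assumption (illustrative): for training members, recommendations lie
# closer to the user's interaction history than for nonmembers.
import numpy as np

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(100, 16))  # hypothetical item embeddings

def attack_feature(interacted_ids, recommended_ids):
    """Distance between the centroids of interacted and recommended items."""
    u = item_emb[interacted_ids].mean(axis=0)
    r = item_emb[recommended_ids].mean(axis=0)
    return np.linalg.norm(u - r)

# Simulated users: members get recommendations overlapping their history,
# nonmembers get unrelated recommendations.
member_feat = [attack_feature(ids, ids[:3]) for ids in
               (rng.choice(100, 5, replace=False) for _ in range(200))]
nonmember_feat = [attack_feature(rng.choice(100, 5, replace=False),
                                 rng.choice(100, 3, replace=False))
                  for _ in range(200)]

tau = np.median(member_feat + nonmember_feat)  # illustrative threshold
tpr = np.mean(np.array(member_feat) < tau)     # members flagged as members
fpr = np.mean(np.array(nonmember_feat) < tau)  # nonmembers falsely flagged
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")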