Search Results for author: Zhikun Zhang

Found 7 papers, 3 papers with code

Finding MNEMON: Reviving Memories of Node Embeddings

no code implementations • 14 Apr 2022 • Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini

Previous security research efforts on graphs have focused exclusively on either (de-)anonymizing the graphs or understanding the security and privacy issues of graph neural networks.

Graph Embedding

Inference Attacks Against Graph Neural Networks

1 code implementation • 6 Oct 2021 • Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph (a minimal sketch of this idea follows below).

Graph Classification • Graph Embedding +1
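
The subgraph inference described in the snippet above can be pictured as binary classification over embedding pairs. Below is a minimal, hypothetical sketch of that framing, not the paper's implementation: the random embeddings, the embedding dimension, the pair-construction scheme, and the logistic-regression attack model are all stand-in assumptions.

```python
# Hypothetical sketch of a subgraph inference attack: given the target graph's
# embedding and a candidate subgraph's embedding, a binary classifier predicts
# containment. Embeddings are random stand-ins; a real attack would obtain
# them from an actual graph encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64          # embedding dimension (assumption)
n_pairs = 2000    # shadow (graph, subgraph) pairs for training the attack

graph_emb = rng.normal(size=(n_pairs, dim))
labels = rng.integers(0, 2, size=n_pairs)  # 1 = subgraph is contained
noise = rng.normal(scale=0.5, size=(n_pairs, dim))
# Positive pairs: subgraph embedding correlated with its parent graph;
# negative pairs: an independent embedding.
sub_emb = np.where(labels[:, None] == 1,
                   graph_emb + noise,
                   rng.normal(size=(n_pairs, dim)))

# One simple feature choice: concatenate the two embeddings.
features = np.concatenate([graph_emb, sub_emb], axis=1)
attack = LogisticRegression(max_iter=1000).fit(features, labels)
print("attack training accuracy:", attack.score(features, labels))
```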

Graph Unlearning

no code implementations • 27 Mar 2021 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

In the context of machine learning (ML), it requires the ML model provider to remove the data subject's data from the training set used to build the ML model, a process known as "machine unlearning" (sketched in its naive form below).
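
As a point of reference for the snippet above: exact unlearning can always be achieved by retraining from scratch without the deleted records, which is the expensive baseline that work like this aims to improve on. A minimal sketch of that naive baseline only, with synthetic data and a logistic-regression model chosen purely for illustration:

```python
# Naive exact unlearning: retrain from scratch on the retained data.
# The data, the model choice, and the deleted indices are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

forget_idx = np.array([3, 42, 97])   # records to be deleted (hypothetical)
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning: the retrained model has provably never seen the
# deleted records.
unlearned_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```

Full retraining is exact but costly on large models and graphs, which is why sharding-style approaches that retrain only the affected partitions are attractive.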

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

1 code implementation • 4 Feb 2021 • Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

As a result, we lack a comprehensive picture of the risks caused by these attacks, e.g., the different scenarios in which they can be applied, the common factors that influence their performance, the relationships among them, and the effectiveness of possible defenses.

Inference Attack • Knowledge Distillation +1

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning

no code implementations • 10 Sep 2020 • Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang

One major privacy attack in this domain is membership inference, where an adversary aims to determine whether a target data sample is part of the training set of a target ML model (a minimal sketch follows below).

Transfer Learning
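
Membership inference is often illustrated with a simple confidence-threshold attack that exploits the confidence gap between training and held-out data on an overfit model. A minimal sketch, where the data, architecture, and threshold are assumptions rather than anything from the paper:

```python
# Hypothetical confidence-threshold membership inference: predict "member"
# when the target model's top class probability exceeds a threshold.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, :2].sum(axis=1) > 0).astype(int)
X_member, X_nonmember = X[:200], X[200:]   # training set vs. held-out data
y_member = y[:200]

# Stand-in target model; the small training set encourages overfitting.
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
target.fit(X_member, y_member)

def membership_guess(model, samples, threshold=0.9):
    """Guess membership from the model's top predicted probability."""
    return model.predict_proba(samples).max(axis=1) > threshold

hit_rate = membership_guess(target, X_member).mean()       # members flagged
false_rate = membership_guess(target, X_nonmember).mean()  # non-members flagged
print(f"member hit rate: {hit_rate:.2f}, non-member flag rate: {false_rate:.2f}")
```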

When Machine Unlearning Jeopardizes Privacy

1 code implementation • 5 May 2020 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy (the underlying two-posterior signal is sketched below).

Inference Attack • Membership Inference Attack
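
The counterproductive effect described above comes from the adversary observing two versions of the model. A hedged sketch of that two-posterior signal, using exact retraining as the unlearning mechanism and synthetic data throughout; the paper's attack likewise leverages the outputs of both model versions, training an attack classifier on such posterior pairs:

```python
# Two-posterior membership signal: query the original and the unlearned model
# on the target sample and compare their outputs. Everything here is a
# synthetic stand-in for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

original = LogisticRegression(max_iter=1000).fit(X, y)

target_idx = 7   # record whose deletion is requested (hypothetical)
keep = np.setdiff1d(np.arange(len(X)), [target_idx])
unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

x = X[target_idx:target_idx + 1]
p_orig = original.predict_proba(x)[0]
p_unlearn = unlearned.predict_proba(x)[0]

# The shift between the two posteriors leaks whether x was in the deleted set;
# an attack model would take np.concatenate([p_orig, p_unlearn]) as features.
print("posterior shift:", float(np.abs(p_orig - p_unlearn).sum()))
```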
