Search Results for author: Xinlei He

Found 8 papers, 5 papers with code

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

1 code implementation · 25 Jul 2022 · Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang

The results show that early stopping can mitigate the membership inference attack, but at the cost of degraded model utility.

Tasks: Data Augmentation, Inference Attack, +1
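The early-stopping mitigation mentioned in the abstract can be sketched in a few lines; the loss curve and patience value below are illustrative assumptions, not the paper's experimental setup:

```python
# Minimal early-stopping sketch (illustrative, not the paper's exact setup):
# stop when validation loss fails to improve for `patience` epochs, which
# limits overfitting -- and thus the train/test gap that membership
# inference attacks exploit -- at the cost of some final utility.

def early_stop_epoch(val_losses, patience=3):
    """Return the epoch at which training would stop."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # stop: no improvement for `patience` epochs
    return len(val_losses) - 1

# Toy validation-loss curve that starts overfitting after epoch 4.
losses = [1.0, 0.8, 0.6, 0.5, 0.45, 0.47, 0.5, 0.55, 0.6]
print(early_stop_epoch(losses))  # → 7 (three epochs past the best epoch, 4)
```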

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders

1 code implementation · 27 Jan 2022 · Tianshuo Cong, Xinlei He, Yang Zhang

Recent research has shown that an ML model's copyright is threatened by model stealing attacks, which aim to train a surrogate model that mimics the behavior of the target model.

Tasks: Self-Supervised Learning
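The surrogate-training idea behind model stealing can be illustrated with a toy black-box victim; the linear victim and least-squares fit below are assumptions for illustration, not the paper's encoder setting:

```python
import numpy as np

# Hypothetical victim: a black-box linear model the attacker can only query.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])

def victim_query(x):
    return x @ w_true  # the attacker sees outputs only, never w_true

# Attack sketch: query the victim on attacker-chosen inputs, then fit a
# surrogate to mimic its input/output behavior (least squares here).
X = rng.normal(size=(100, 3))
y = victim_query(X)
w_surrogate, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_surrogate, w_true, atol=1e-6))  # → True
```

With a noise-free linear victim the surrogate recovers the weights exactly; real attacks target nonlinear models and only approximate them.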

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders

no code implementations · 19 Jan 2022 · Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang

Unsupervised representation learning techniques have been developing rapidly to make full use of unlabeled images.

Tasks: Contrastive Learning, Representation Learning

Model Stealing Attacks Against Inductive Graph Neural Networks

1 code implementation · 15 Dec 2021 · Yun Shen, Xinlei He, Yufei Han, Yang Zhang

Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications.

Node-Level Membership Inference Attacks Against Graph Neural Networks

no code implementations · 10 Feb 2021 · Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang

To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.

Tasks: BIG-bench Machine Learning

Quantifying and Mitigating Privacy Risks of Contrastive Learning

1 code implementation · 8 Feb 2021 · Xinlei He, Yang Zhang

Our experimental results show that contrastive models trained on image datasets are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models.

Tasks: BIG-bench Machine Learning, Contrastive Learning, +3
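The membership inference attacks quantified here reduce, in their simplest form, to confidence thresholding; the toy posteriors and threshold below are assumptions for illustration, not the paper's attack:

```python
# Minimal confidence-thresholding membership inference sketch (illustrative):
# overfit models assign higher confidence to training ("member") samples,
# so the attacker predicts "member" when the max posterior is high enough.

def infer_membership(posterior, threshold=0.9):
    return max(posterior) >= threshold

# Toy posteriors: members tend to get peaked outputs, non-members flatter ones.
member = [0.97, 0.02, 0.01]       # hypothetical training sample
non_member = [0.55, 0.30, 0.15]   # hypothetical unseen sample

print(infer_membership(member), infer_membership(non_member))  # → True False
```

The paper's finding that contrastive models are less vulnerable corresponds, in this picture, to a smaller confidence gap between members and non-members.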

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

1 code implementation · 4 Feb 2021 · Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

As a result, we lack a comprehensive picture of the risks posed by these attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationships among them, or the effectiveness of possible defenses.

Tasks: BIG-bench Machine Learning, Inference Attack, +2

Stealing Links from Graph Neural Networks

no code implementations · 5 May 2020 · Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.

Tasks: Fraud Detection, Recommendation Systems
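A common intuition behind such link-stealing attacks is that connected nodes receive similar GNN output posteriors; the toy posteriors and similarity threshold below are illustrative assumptions, not the paper's exact attack:

```python
import numpy as np

# Illustrative link-stealing sketch: threshold a pairwise similarity of
# node posteriors to guess which node pairs are connected in the
# (private) training graph.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy posteriors for 4 nodes over 3 classes (assumed, for illustration).
posteriors = np.array([
    [0.90, 0.05, 0.05],  # node 0
    [0.85, 0.10, 0.05],  # node 1: similar to node 0 -> likely linked
    [0.10, 0.10, 0.80],  # node 2
    [0.05, 0.15, 0.80],  # node 3: similar to node 2 -> likely linked
])

threshold = 0.9
guessed_edges = [
    (i, j)
    for i in range(len(posteriors))
    for j in range(i + 1, len(posteriors))
    if cosine(posteriors[i], posteriors[j]) > threshold
]
print(guessed_edges)  # → [(0, 1), (2, 3)]
```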
