Search Results for author: Zhikun Zhang

Found 17 papers, 12 papers with code

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

1 code implementation • 4 Feb 2021 • Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationship among them, or the effectiveness of possible defenses.

Attribute • BIG-bench Machine Learning • +3

When Machine Unlearning Jeopardizes Privacy

1 code implementation • 5 May 2020 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy.
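
The intuition lends itself to a short sketch. The following is a minimal, hypothetical illustration of the general idea, assuming black-box access to class posteriors from both the original and the unlearned model; the feature construction, classifier, and toy shadow data are illustrative, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_classes = 10

def attack_features(post_original, post_unlearned):
    # The gap between the two posteriors is the membership signal:
    # unlearning a sample changes the model's output on that sample.
    return np.concatenate([post_original, post_unlearned])

def toy_pair(member):
    # Toy shadow data: members see a larger posterior shift after unlearning.
    p = rng.dirichlet(np.ones(n_classes))
    q = np.abs(p + rng.normal(0.0, 0.1 if member else 0.01, n_classes))
    return attack_features(p, q / q.sum())

X = np.array([toy_pair(m) for m in [True] * 500 + [False] * 500])
y = np.array([1] * 500 + [0] * 500)
attack = RandomForestClassifier(n_estimators=100).fit(X, y)
print("attack accuracy on toy training pairs:", attack.score(X, y))
```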

Inference Attack • Machine Unlearning • +1

Graph Unlearning

1 code implementation • 27 Mar 2021 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

In this paper, we propose GraphEraser, a novel machine unlearning framework tailored to graph data.

Machine Unlearning

Inference Attacks Against Graph Neural Networks

1 code implementation • 6 Oct 2021 • Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.
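
One way such a subgraph inference attack could be framed is as a pairwise classification problem over embeddings. The sketch below is a hypothetical illustration under that assumption; the pairing features, classifier, and synthetic data are mine, not the paper's actual attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64

def pair_features(g_emb, sub_emb):
    # Combine the two embeddings; element-wise ops are a common choice
    # for pairwise decision tasks over embedding spaces.
    return np.concatenate([g_emb, sub_emb, g_emb - sub_emb, g_emb * sub_emb])

def toy_pair(contained):
    # Toy data: a contained subgraph's embedding correlates with the
    # target graph's embedding; a non-contained one is independent.
    g = rng.normal(size=dim)
    s = 0.7 * g + 0.3 * rng.normal(size=dim) if contained else rng.normal(size=dim)
    return pair_features(g, s)

X = np.array([toy_pair(c) for c in [True] * 500 + [False] * 500])
y = np.array([1] * 500 + [0] * 500)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("attack accuracy on toy training pairs:", clf.score(X, y))
```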

Graph Classification • Graph Embedding • +2

On the Privacy Risks of Cell-Based NAS Architectures

1 code implementation • 4 Sep 2022 • Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang

Existing studies on neural architecture search (NAS) mainly focus on efficiently and effectively searching for network architectures with better performance.

Neural Architecture Search

DPMLBench: Holistic Evaluation of Differentially Private Machine Learning

1 code implementation • 10 May 2023 • Chengkun Wei, Minghu Zhao, Zhikun Zhang, Min Chen, Wenlong Meng, Bo Liu, Yuan Fan, Wenzhi Chen

We also explore some improvements that can maintain model utility and defend against MIAs more effectively.

Image Classification

LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors

1 code implementation • 26 Aug 2023 • Chengkun Wei, Wenlong Meng, Zhikun Zhang, Min Chen, Minghu Zhao, Wenjing Fang, Lei Wang, Zihui Zhang, Wenzhi Chen

Instead of directly inverting the triggers, LMSanitator aims to invert the predefined attack vectors (pretrained models' output when the input is embedded with triggers) of the task-agnostic backdoors, which achieves much better convergence performance and backdoor detection accuracy.
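
The inversion idea can be illustrated with a short, hypothetical sketch: rather than searching for a discrete trigger, optimize a soft trigger so that the encoder's outputs on diverse inputs collapse toward a single vector, which is the signature of a task-agnostic backdoor. The encoder interface, tensor shapes, and loss below are assumptions for illustration, not LMSanitator's exact procedure.

```python
import torch
from torch import nn

def invert_attack_vector(encoder: nn.Module, input_embeds: torch.Tensor,
                         steps: int = 200, lr: float = 0.1):
    # input_embeds: (batch, seq_len, dim) embeddings of diverse sentences;
    # encoder is assumed to map them to (batch, dim) sentence features.
    for p in encoder.parameters():
        p.requires_grad_(False)        # only the soft trigger is optimized
    dim = input_embeds.size(-1)
    soft_trigger = torch.zeros(1, 1, dim, requires_grad=True)
    opt = torch.optim.Adam([soft_trigger], lr=lr)
    feats = None
    for _ in range(steps):
        # Prepend the candidate soft trigger to every input.
        trig = soft_trigger.expand(input_embeds.size(0), -1, -1)
        feats = encoder(torch.cat([trig, input_embeds], dim=1))
        # A task-agnostic backdoor maps all triggered inputs to one
        # predefined vector, so minimize the spread of the features.
        loss = feats.var(dim=0).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The mean feature is the candidate inverted attack vector; a very
    # small final spread suggests the encoder is backdoored.
    return soft_trigger.detach(), feats.mean(dim=0).detach()
```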

FACE-AUDITOR: Data Auditing in Facial Recognition Systems

2 code implementations • 5 Apr 2023 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Yang Zhang

Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images during the model deployment phase.

Generated Graph Detection

1 code implementation • 13 Jun 2023 • Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang

Graph generative models are becoming increasingly effective for data distribution approximation and data augmentation.

Data Augmentation • Face Swapping • +1

ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning

1 code implementation • 6 Sep 2023 • Linkang Du, Min Chen, Mingyang Sun, Shouling Ji, Peng Cheng, Jiming Chen, Zhikun Zhang

In safety-critical domains such as autonomous vehicles, offline deep reinforcement learning (offline DRL) is frequently used to train models on pre-collected datasets, as opposed to training them by interacting with the real-world environment, as in online DRL.

Autonomous Vehicles • Offline RL • +1

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning

no code implementations • 10 Sep 2020 • Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang

One major privacy attack in this domain is membership inference, where an adversary aims to determine whether a target data sample is part of the training set of a target ML model.
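
As a concrete baseline, the well-known confidence-thresholding variant of membership inference can be sketched in a few lines: models tend to be more confident on training members than on unseen samples. The threshold value here is an arbitrary illustration.

```python
import numpy as np

def confidence_attack(posteriors, threshold=0.9):
    """posteriors: (n, n_classes) softmax outputs of the target model.
    Returns a boolean 'member' guess per sample."""
    return posteriors.max(axis=1) >= threshold

# Toy usage: confident predictions are flagged as training members.
post = np.array([[0.97, 0.02, 0.01],   # very confident -> likely member
                 [0.40, 0.35, 0.25]])  # uncertain      -> likely non-member
print(confidence_attack(post))  # [ True False]
```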

BIG-bench Machine Learning • Transfer Learning

Finding MNEMON: Reviving Memories of Node Embeddings

no code implementations • 14 Apr 2022 • Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini

Previous security research efforts orbiting around graphs have focused exclusively on either (de-)anonymizing the graphs or understanding the security and privacy issues of graph neural networks.

Graph Embedding

Parameter Identification for Partial Differential Equations with Spatiotemporal Varying Coefficients

no code implementations • 30 Jun 2023 • Guangtao Zhang, Yiting Duan, Guanyu Pan, Qijing Chen, Huiyu Yang, Zhikun Zhang

To comprehend complex systems with multiple states, it is imperative to identify these states from system outputs.

FAKEPCD: Fake Point Cloud Detection via Source Attribution

no code implementations • 18 Dec 2023 • Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang

Taking open-world attribution as an example, FAKEPCD attributes point clouds to known sources with an accuracy of 0.82-0.98 and to unknown sources with an accuracy of 0.73-1.00.

Attribute • Cloud Detection

DPAdapter: Improving Differentially Private Deep Learning through Noise Tolerance Pre-training

no code implementations • 5 Mar 2024 • ZiHao Wang, Rui Zhu, Dongruo Zhou, Zhikun Zhang, John Mitchell, Haixu Tang, XiaoFeng Wang

DPAdapter modifies and enhances the sharpness-aware minimization (SAM) technique, utilizing a two-batch strategy to provide a more accurate perturbation estimate and an efficient gradient descent, thereby improving parameter robustness against noise.
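
A rough sketch of what such a two-batch SAM step can look like, under the assumption that one batch estimates the sharpness perturbation and a second, independent batch computes the descent gradient at the perturbed weights; the hyperparameters and update details are illustrative, not DPAdapter's implementation.

```python
import torch

def two_batch_sam_step(model, loss_fn, batch_a, batch_b, opt, rho=0.05):
    # Step 1: estimate the sharpness perturbation on batch A.
    xa, ya = batch_a
    loss_fn(model(xa), ya).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    eps = []
    with torch.no_grad():
        scale = rho / (torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12)
        for p in model.parameters():
            if p.grad is None:
                continue
            e = p.grad * scale
            p.add_(e)                 # climb to the locally sharpest point
            eps.append((p, e))
    model.zero_grad()
    # Step 2: compute the descent gradient on an independent batch B,
    # evaluated at the perturbed weights.
    xb, yb = batch_b
    loss_fn(model(xb), yb).backward()
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                 # restore the original weights
    opt.step()                        # apply the batch-B gradient
    opt.zero_grad()
```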
