Search Results for author: Dayong Ye

Found 12 papers, 1 paper with code

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Shang Wang, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang

Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities to enhance productivity.

Model extraction

Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning

no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Jiayang Li, Kun Gao, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang

For example, the adversary can challenge the model owner by revealing that, despite efforts to unlearn it, the influence of the duplicated subset remains in the model.

Machine Unlearning
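
The duplication idea above can be made concrete with a minimal sketch: the adversary plants copies of a subset in the training data, the owner "unlearns" one copy by exact retraining, and a membership-style confidence check shows the subset's influence persists. The data and model choices below are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

    dup_X, dup_y = X[:25], y[:25]                # subset the adversary duplicates
    model = LogisticRegression().fit(np.vstack([X, dup_X]),
                                     np.concatenate([y, dup_y]))

    # The owner "unlearns" the original copy via exact retraining without it,
    # but the planted duplicate is still in the training set.
    unlearned = LogisticRegression().fit(np.vstack([X[25:], dup_X]),
                                         np.concatenate([y[25:], dup_y]))

    # Membership-style check: the "forgotten" points still score like members.
    conf = unlearned.predict_proba(dup_X)[np.arange(25), dup_y]
    print("mean confidence on the 'unlearned' subset:", conf.mean())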

When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?

no code implementations • 20 Oct 2024 • Shang Wang, Tianqing Zhu, Dayong Ye, Wanlei Zhou

While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting.

Machine Unlearning • RAG +3
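
One plausible reading of the title, sketched under explicit assumptions: forgetting is implemented at the retrieval layer, so the frozen LLM can no longer ground answers in removed documents. The toy keyword retriever stands in for a vector store, and names like retrieve and unlearn are invented for illustration, not the paper's API.

    corpus = {
        "doc1": "Alice's medical record ...",
        "doc2": "Public product manual ...",
    }

    def retrieve(query: str, index: dict[str, str]) -> list[str]:
        # Toy keyword retriever standing in for a vector store.
        return [text for text in index.values()
                if any(w in text.lower() for w in query.lower().split())]

    def unlearn(doc_id: str, index: dict[str, str]) -> None:
        # Forgetting = deleting from the index; the base model is untouched.
        index.pop(doc_id, None)

    print(retrieve("alice medical", corpus))  # doc1 is still retrievable
    unlearn("doc1", corpus)
    print(retrieve("alice medical", corpus))  # nothing grounds the query now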

Reinforcement Unlearning

1 code implementation • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Kun Gao, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Inference Attack • Machine Unlearning +2
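
To make the definition of unlearning concrete, here is a minimal sketch of one common baseline, gradient ascent on the forget set, applied to a hand-rolled logistic regression. It is a generic illustration, not the reinforcement-unlearning method this paper proposes.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) > 0).astype(float)
    w = np.zeros(5)

    def grad(w, X, y):
        # Gradient of the logistic loss.
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (p - y) / len(y)

    for _ in range(200):                    # ordinary training (descent)
        w -= 0.5 * grad(w, X, y)

    forget_X, forget_y = X[:20], y[:20]     # rows covered by the removal request
    for _ in range(20):                     # unlearning baseline: ascend the loss
        w += 0.1 * grad(w, forget_X, forget_y)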

Boosting Model Inversion Attacks with Adversarial Examples

no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou

Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.

model
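
The learning-based, black-box attack family the snippet refers to reads roughly as follows: the attacker queries the target for confidence vectors on auxiliary data and trains an inversion model mapping scores back to inputs. This is a generic instance of the approach, with invented data and models, not the paper's new training paradigm.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 8))
    y = (X[:, :2].sum(axis=1) > 0).astype(int)
    target = LogisticRegression().fit(X, y)   # victim model, treated as a black box

    X_aux = rng.normal(size=(1000, 8))        # attacker's auxiliary data
    scores = target.predict_proba(X_aux)      # black-box query results
    inverter = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(scores, X_aux)

    # Reconstruct unseen inputs from their confidence vectors alone.
    recon = inverter.predict(target.predict_proba(X[:5]))
    print("reconstruction MSE:", np.mean((recon - X[:5]) ** 2))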

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy

no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou

The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
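
The title suggests a single privacy parameter controlling output perturbation. As a rough sketch under that assumption: add Laplace noise with scale 1/epsilon to the confidence vector, re-normalize, and restore the original hard label so classification accuracy is untouched. The paper's exact mechanism may differ.

    import numpy as np

    def defend(conf: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
        # Perturb a confidence vector with Laplace noise, keeping the label.
        rng = rng or np.random.default_rng()
        label = int(conf.argmax())
        noisy = conf + rng.laplace(scale=1.0 / epsilon, size=conf.shape)
        noisy = np.clip(noisy, 1e-6, None)
        noisy /= noisy.sum()                  # back onto the probability simplex
        top = int(noisy.argmax())
        if top != label:                      # restore the predicted label
            noisy[label], noisy[top] = noisy[top], noisy[label]
        return noisy

    print(defend(np.array([0.7, 0.2, 0.1]), epsilon=1.0))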

Label-only Model Inversion Attack: The Attack that Requires the Least Information

no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou

Strategies for launching a contemporary model inversion attack are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.
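
The access levels contrasted in the snippet can be stated as query interfaces; the label-only setting of the title grants the attacker only the last and weakest of the three. The code is illustrative, not from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.random.default_rng(3).normal(size=(300, 4))
    y = (X[:, 0] > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    def white_box():                 # full access to model parameters
        return model.coef_, model.intercept_

    def black_box(x):                # predicted confidence score vectors
        return model.predict_proba(x)

    def label_only(x):               # the least information: hard labels only
        return model.predict(x)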

DP-Image: Differential Privacy for Image Data in Feature Space

no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou

The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.
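
A hedged sketch of the feature-space idea in the title: calibrate noise in an image's embedding rather than on raw pixels, then map back to image space. The toy linear encoder and pseudo-inverse decoder below are stand-ins for the paper's networks, not its method.

    import numpy as np

    rng = np.random.default_rng(4)
    E = rng.normal(size=(64, 16)) / 8     # toy encoder: 64 pixels -> 16 features
    D = np.linalg.pinv(E)                 # toy decoder via pseudo-inverse

    image = rng.random(64)
    features = image @ E
    noisy = features + rng.laplace(scale=0.1, size=features.shape)  # DP-style noise
    private_image = np.clip(noisy @ D, 0.0, 1.0)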

Differentially Private Multi-Agent Planning for Logistic-like Problems

no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu

To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.

Privacy Preserving
