no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Shang Wang, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang
Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities to enhance productivity.
no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Jiayang Li, Kun Gao, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang
For example, the adversary can challenge the model owner by revealing that, despite efforts to unlearn it, the influence of the duplicated subset remains in the model.
no code implementations • 20 Oct 2024 • Shang Wang, Tianqing Zhu, Dayong Ye, Wanlei Zhou
While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting.
1 code implementation • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Kun Gao, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue
Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.
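To make the definition concrete, the naive exact baseline retrains from scratch on the data that remains after a removal request; by construction, the forgotten examples then have no influence on the model. The sketch below illustrates only this generic baseline (with an assumed scikit-learn classifier as a stand-in for any training routine), not the methods studied in the paper.

```python
# Minimal sketch of the naive exact-unlearning baseline:
# on a removal request, retrain from scratch on the remaining data.
# Illustrative only -- not the method proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx):
    """Drop the requested examples and retrain from scratch."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])  # influence of forget_idx is gone by construction
    return model

# Usage: remove examples 3 and 7 from a toy dataset, then retrain.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
clean_model = unlearn_by_retraining(X, y, forget_idx=[3, 7])
```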
no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou
Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
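As background for the black-box setting, a generic learning-based inversion attack queries the target for confidence vectors on auxiliary data and then trains an inversion model that maps confidences back to inputs. The sketch below illustrates that generic attack class with assumed toy models; it is not the training paradigm proposed in the paper.

```python
# Sketch of generic learning-based black-box model inversion:
# query the target for confidence vectors on auxiliary data, then
# fit an inversion model that maps confidences back to inputs.
# Illustrative of the attack class only, not this paper's paradigm.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)
X_priv = rng.normal(size=(500, 8))              # stand-in for the target's private data
y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)
target = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_priv, y_priv)

# The attacker only calls target.predict_proba (black-box access).
X_aux = rng.normal(size=(500, 8))               # attacker's auxiliary data
conf = target.predict_proba(X_aux)              # queried confidence vectors

# Inversion model: confidence vector -> reconstructed input.
inverter = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000).fit(conf, X_aux)
x_hat = inverter.predict(target.predict_proba(X_aux[:1]))  # reconstruct one input
```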
no code implementations • 31 Dec 2022 • Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei Zhou
A large number of studies have focused on security and privacy problems in reinforcement learning.
no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou
The experimental results show that the method is an effective and timely defense against both membership inference and model inversion attacks, with no reduction in accuracy.
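For context on what such a defense must blunt, the classic membership inference heuristic flags a point as a training member when the model's top confidence exceeds a threshold, exploiting the fact that models tend to be more confident on data they were trained on. This is a minimal sketch of that generic heuristic on toy data, not the attacks evaluated in the paper.

```python
# Sketch of the confidence-thresholding membership inference heuristic
# that such defenses aim to blunt: training members tend to receive
# higher confidence than unseen points. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

def infer_membership(model, X, threshold=0.9):
    """Guess 'member' when the model's top confidence exceeds the threshold."""
    return (model.predict_proba(X).max(axis=1) > threshold).astype(int)

# The gap between these two rates is the attacker's signal.
members_flagged = infer_membership(model, X_train).mean()
nonmembers_flagged = infer_membership(model, rng.normal(size=(200, 5))).mean()
```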
no code implementations • 13 Mar 2022 • Dayong Ye, Huiqiang Chen, Shuai Zhou, Tianqing Zhu, Wanlei Zhou, Shouling Ji
However, these results do not mean that transfer learning models are impervious to model inversion attacks.
no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou
Strategies for launching a contemporary model inversion attack are generally based either on the predicted confidence score vectors, i.e., black-box attacks, or on the parameters of a target model, i.e., white-box attacks.
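To illustrate the white-box side of this distinction, an attacker who holds the target's parameters can synthesize a high-confidence input directly by gradient ascent on the class confidence. The sketch below assumes a toy logistic model with made-up weights; it shows the attack class, not the strategies analyzed in the paper.

```python
# Sketch of a white-box model inversion: with the target's parameters
# in hand, run gradient ascent on p(y=1|x) to synthesize a
# high-confidence input. Illustrative of the attack class only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed white-box knowledge: the target's weights and bias (toy values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def invert_whitebox(w, b, steps=200, lr=0.5):
    """Gradient ascent on the class-1 confidence of a logistic model."""
    x = np.zeros_like(w)
    for _ in range(steps):
        p = sigmoid(w @ x + b)
        x += lr * p * (1 - p) * w   # dp/dx for logistic regression
    return x, sigmoid(w @ x + b)

x_rec, conf = invert_whitebox(w, b)
# A black-box attacker, by contrast, sees only confidence scores per query
# and must estimate such directions from the outputs alone.
```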
no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou
The widespread use of images in social networks, government databases, and industrial applications poses serious privacy risks and has raised significant public concern.
no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu
To the best of our knowledge, this paper is the first to apply differential privacy to multi-agent planning as a means of preserving agents' privacy in logistic-like problems.
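For reference, the canonical tool for privatizing a numeric query under differential privacy is the Laplace mechanism, which adds noise scaled to sensitivity/epsilon. The sketch below shows only this generic mechanism with assumed parameters; the paper's planning-specific construction is not reproduced here.

```python
# Sketch of the standard Laplace mechanism, the canonical way to make a
# numeric query epsilon-differentially private. Generic tool only; the
# paper's planning-specific mechanism is not shown.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace(sensitivity/epsilon) noise."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Usage: privatize an agent's reported count (sensitivity 1) at epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```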
no code implementations • 5 Aug 2020 • Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip S. Yu
Artificial Intelligence (AI) has attracted a great deal of attention in recent years.