no code implementations • 13 Jun 2025 • LinLin Wang, Tianqing Zhu, Laiqiao Qin, Longxiang Gao, Wanlei Zhou
To show the impact of this bias, this paper proposes a Bias Retrieval and Reward Attack (BRRA) framework, which systematically investigates attack pathways that amplify language model biases through manipulation of a RAG system.
no code implementations • 27 May 2025 • Yufeng Wang, Yiguang Bai, Tianqing Zhu, Ismail Ben Ayed, Jing Yuan
Community partitioning is crucial in network analysis, with modularity optimization being the prevailing technique.
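As a quick illustration of modularity optimization, here is a minimal Python sketch using networkx's greedy modularity heuristic on a standard benchmark graph; this is a generic baseline, not the authors' partitioning technique.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Greedy modularity maximization on the Zachary karate-club benchmark.
G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)
print(f"found {len(communities)} communities")
print(f"modularity Q = {modularity(G, communities):.3f}")
```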
no code implementations • 23 May 2025 • Wenhan Chang, Tianqing Zhu, Yu Zhao, Shuangyong Song, Ping Xiong, Wanlei Zhou, Yongxiang Li
In the era of rapid generative AI development, interactions between humans and large language models face significant risks of misuse.
no code implementations • 8 Mar 2025 • Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
While in-processing fairness approaches show promise in mitigating biased predictions, their potential impact on privacy leakage remains under-explored.
no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Jiayang Li, Kun Gao, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang
For example, the adversary can challenge the model owner by revealing that, despite efforts to unlearn it, the influence of the duplicated subset remains in the model.
no code implementations • 28 Jan 2025 • Dayong Ye, Tianqing Zhu, Shang Wang, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang
Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities to enhance productivity.
no code implementations • 6 Jan 2025 • Huiqiang Chen, Tianqing Zhu, Wanlei Zhou, Wei Zhao
Federated Learning (FL) has gained significant attention as it facilitates collaborative machine learning among multiple clients without centralizing their data on a server.
1 code implementation • 16 Dec 2024 • Mengde Han, Tianqing Zhu, Lefeng Zhang, Huan Huo, Wanlei Zhou
We introduce an innovative modification to traditional VFL by employing a mechanism that inverts the typical learning trajectory with the objective of extracting specific data contributions.
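One common way to "invert the typical learning trajectory" is gradient ascent on the data to be forgotten. The sketch below illustrates that generic idea in PyTorch; the paper's vertical-federated setting, where the model is split across parties, is not reproduced here.

```python
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_x, forget_y):
    """One ascent step that *increases* the loss on the forget set."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(forget_x), forget_y)
    (-loss).backward()  # negated loss => gradient ascent
    optimizer.step()
    return loss.item()
```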
no code implementations • 8 Dec 2024 • Faqian Guan, Tianqing Zhu, Wenhan Chang, Wei Ren, Wanlei Zhou
However, we find that an attacker can combine the data knowledge of multiple attackers to create a more effective attack model, which can be referred to as a cross-dataset attack.
no code implementations • 13 Nov 2024 • Laiqiao Qin, Tianqing Zhu, LinLin Wang, Wanlei Zhou
Machine unlearning is a newly emerged technology that removes a subset of the training data from a trained model without affecting the model's performance on the remaining data.
no code implementations • 12 Nov 2024 • Meng Yang, Tianqing Zhu, Chi Liu, Wanlei Zhou, Shui Yu, Philip S. Yu
Through this taxonomy analysis, we capture the unique security and privacy issues of pre-trained models, categorizing and summarizing existing issues based on their characteristics.
no code implementations • 6 Nov 2024 • Hengzhu Liu, Tianqing Zhu, Lefeng Zhang, Ping Xiong
Additionally, experimental results on real-world datasets demonstrate this game-theoretic unlearning algorithm's effectiveness and its ability to generate an unlearned model with performance similar to that of the retrained one while mitigating extra privacy leakage risks.
no code implementations • 31 Oct 2024 • Wenhan Chang, Tianqing Zhu, Yufeng Wu, Wanlei Zhou
However, it faces several key challenges, including accurately implementing unlearning, ensuring privacy protection during the unlearning process, and achieving effective unlearning without significantly compromising model performance.
no code implementations • 20 Oct 2024 • Shang Wang, Tianqing Zhu, Dayong Ye, Wanlei Zhou
While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting.
no code implementations • 22 Aug 2024 • Duoxun Tang, Yuxin Cao, Xi Xiao, Derui Wang, Sheng Wen, Tianqing Zhu
Therefore, to generate adversarial examples with a low budget and to provide them with a higher verisimilitude, we propose a novel black-box video attack framework, called Stylized Logo Attack (SLA).
1 code implementation • 24 Jul 2024 • Xiaobiao Du, Haiyang Sun, Ming Lu, Tianqing Zhu, Xin Yu
With this dataset, we make generative models more robust when applied to cars.
no code implementations • 1 Jul 2024 • Huajie Chen, Tianqing Zhu, Lefeng Zhang, Bo Liu, Derui Wang, Wanlei Zhou, Minhui Xue
To limit the potential threat, QUEEN performs sensitivity measurement and output perturbation, preventing the adversary from training a high-performance piracy model.
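The abstract does not detail QUEEN's perturbation scheme; as a hypothetical illustration of output perturbation as an anti-extraction defense, one can noise the returned probability vector so a piracy model distilled from the outputs trains on degraded signals:

```python
import numpy as np

def perturb_output(probs: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """Hypothetical sketch: add Laplace noise to the served probabilities
    and renormalize, degrading a pirate's distillation signal."""
    noisy = probs + np.random.laplace(scale=eps, size=probs.shape)
    noisy = np.clip(noisy, 1e-8, None)
    return noisy / noisy.sum()
```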
no code implementations • 22 Jun 2024 • Faqian Guan, Tianqing Zhu, Hui Sun, Wanlei Zhou, Philip S. Yu
The handling of these varying data dimensions posed a challenge in using a single model to effectively conduct link stealing attacks on different datasets.
no code implementations • 18 Jun 2024 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu
However, training data cannot be accessed on the server under the federated learning paradigm, conflicting with the requirements of the centralized unlearning process.
1 code implementation • 16 Jun 2024 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Wei Zhao
After that, we simultaneously filter out the parameters that are also important for the remaining targets and apply the pruning-based unlearning method, a simple but effective solution for removing information about the target that needs to be forgotten.
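A rough sketch of pruning-based unlearning under assumed mechanics (per-parameter importance scores, e.g., squared gradients or Fisher estimates, are taken as given; this is not the paper's code):

```python
import torch

def prune_for_unlearning(model, forget_imp, retain_imp, k=0.01):
    """Zero the top-k fraction of weights that matter most for the
    forget target while mattering little for the remaining targets."""
    for name, p in model.named_parameters():
        score = forget_imp[name] - retain_imp[name]
        thresh = torch.quantile(score.flatten(), 1.0 - k)
        with torch.no_grad():
            p[score > thresh] = 0.0
```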
no code implementations • 16 Jun 2024 • Laiqiao Qin, Tianqing Zhu, Wanlei Zhou, Philip S. Yu
We discuss how KD can address the challenges in FL, including privacy protection, data heterogeneity, communication efficiency, and personalization.
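For reference, the knowledge distillation at the heart of KD-for-FL schemes typically minimizes the KL divergence between softened teacher and student predictions; a generic PyTorch sketch, not any specific method from this survey:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard distillation."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```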
no code implementations • 16 Jun 2024 • LinLin Wang, Tianqing Zhu, Wanlei Zhou, Philip S. Yu
Building upon our observations, we identify the trade-offs between privacy and fairness and between security and fairness within the context of federated learning.
no code implementations • 14 Jun 2024 • Youyang Qu, Ming Liu, Tianqing Zhu, Longxiang Gao, Shui Yu, Wanlei Zhou
Federated Learning (FL) offers a promising paradigm for training Large Language Models (LLMs) in a decentralized manner while preserving data privacy and minimizing communication overhead.
no code implementations • 7 Jun 2024 • Xiaobiao Du, Haiyang Sun, Shuyun Wang, Zhuojie Wu, Hongwei Sheng, Jiaying Ying, Ming Lu, Tianqing Zhu, Kun Zhan, Xin Yu
(1) High-Volume: 2,500 cars are meticulously scanned by 3D scanners, obtaining car images and point clouds with real-world dimensions; (2) High-Quality: each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) High-Diversity: the dataset contains various cars from over 100 brands, collected under three distinct lighting conditions, including reflective, standard, and dark.
no code implementations • 27 May 2024 • Xuhan Zuo, Minghao Wang, Tianqing Zhu, Lefeng Zhang, Shui Yu, Wanlei Zhou
With the growing need to comply with privacy regulations and respond to user data deletion requests, integrating machine unlearning into IoT-based federated learning has become imperative.
no code implementations • 24 May 2024 • Wenhan Chang, Tianqing Zhu, Heng Xu, Wenjian Liu, Wanlei Zhou
In this paper, to accurately define the unlearning class of complex data, we apply the notion of a Concept, rather than an image feature or a text token, to represent the semantic information of the unlearning class.
1 code implementation • 21 Apr 2024 • Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou
Current research centres on efficient unlearning to erase the influence of data from the model and neglects the subsequent impacts on the remaining data.
no code implementations • 23 Mar 2024 • Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu, Dusit Niyato
Large Language Models (LLMs) are foundational to AI advancements, facilitating applications like predictive text generation.
1 code implementation • 19 Feb 2024 • Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu
This paper presents a novel adversarial attack strategy, AICAttack (Attention-based Image Captioning Attack), designed to attack image captioning models through subtle perturbations on images.
1 code implementation • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Kun Gao, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue
Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.
no code implementations • 7 Nov 2023 • Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
It leverages the difference in the predictions from both the original and fairness-enhanced models and exploits the observed prediction gaps as attack clues.
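A minimal sketch of this idea, assuming the per-class prediction gap is the feature handed to a downstream attack classifier (the paper's exact attack pipeline may differ):

```python
import torch

def prediction_gap_features(original_model, fair_model, x):
    """Return the per-class gap between the two models' predictions;
    the gap serves as the clue fed to an attack classifier."""
    with torch.no_grad():
        p_orig = torch.softmax(original_model(x), dim=-1)
        p_fair = torch.softmax(fair_model(x), dim=-1)
    return p_orig - p_fair
```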
no code implementations • 9 Oct 2023 • Hu Zhang, Xin Shen, Heming Du, Huiqiang Chen, Chen Liu, Hongwei Sheng, Qingzheng Xu, MD Wahiduzzaman Khan, Qingtao Yu, Tianqing Zhu, Scott Chapman, Zi Huang, Xin Yu
In the wheat nutrient deficiencies classification challenge, we present the DividE and EnseMble (DEEM) method for progressive test data predictions.
no code implementations • 19 Aug 2023 • Hui Sun, Tianqing Zhu, Wenhan Chang, Wanlei Zhou
Based on a substitution mechanism and fake labels, we propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
no code implementations • 25 Jun 2023 • Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu
Federated learning (FL) has been a hot topic in recent years.
no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou
Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
no code implementations • 6 Jun 2023 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu
Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more.
no code implementations • 2 Jun 2023 • Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou
GAN-generated image detection has become the first line of defense against the malicious uses of machine-synthesized image manipulations such as deepfakes.
no code implementations • 23 Mar 2023 • Huajie Chen, Tianqing Zhu, Yuan Zhao, Bo Liu, Xin Yu, Wanlei Zhou
By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
no code implementations • 31 Dec 2022 • Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei Zhou
A large number of studies have focused on these security and privacy problems in reinforcement learning.
no code implementations • 20 Oct 2022 • Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou
As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale.
no code implementations • 28 Sep 2022 • Mengde Han, Tianqing Zhu, Wanlei Zhou
The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.
1 code implementation • 10 Apr 2022 • Yuexin Xiang, Yuchen Lei, Ding Bao, Wei Ren, Tiantian Li, Qingqing Yang, Wenmao Liu, Tianqing Zhu, Kim-Kwang Raymond Choo
Cryptocurrencies are no longer just the preferred option for cybercriminal activities on darknets, owing to their increasing adoption in mainstream applications.
no code implementations • 22 Mar 2022 • Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou
To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.
no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou
The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou
In launching a contemporary model inversion attack, the strategies discussed are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.
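For illustration, a minimal white-box inversion sketch in PyTorch, which optimizes a dummy input until the target model assigns high confidence to a chosen class (the input shape is an assumption); black-box variants must instead search using only the returned confidence scores:

```python
import torch
import torch.nn.functional as F

def invert(model, target_class, shape=(1, 3, 32, 32), steps=200, lr=0.1):
    """Gradient-based white-box inversion toward a target class."""
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    label = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), label).backward()
        opt.step()
    return x.detach()  # a class-representative reconstruction
```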
no code implementations • 13 Mar 2022 • Dayong Ye, Huiqiang Chen, Shuai Zhou, Tianqing Zhu, Wanlei Zhou, Shouling Ji
However, they may not mean that transfer learning models are impervious to model inversion attacks.
1 code implementation • 19 May 2021 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo
Experimental findings on the testing set show that our scheme preserves image privacy while maintaining the availability of the training set in the deep learning models.
no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou
The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.
no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
no code implementations • 7 Oct 2020 • Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou
In this way, the impact of data correlation is mitigated by the proposed feature selection scheme, and moreover, the privacy of correlated data in learning is guaranteed.
no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu
A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.
no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu
Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.
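The standard primitive for privately releasing such numeric quantities (e.g., an agent's reported cost) is the Laplace mechanism; a minimal, generic sketch with sensitivity and epsilon assumed given:

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Release value + Lap(sensitivity / epsilon) noise for epsilon-DP."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
```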
2 code implementations • 14 Aug 2020 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo
We devise an efficient mechanism to select host images and watermark images, and we utilize the improved discrete wavelet transform (DWT)-based Patchwork watermarking algorithm with a set of valid hyperparameters to embed digital watermarks from the watermark image dataset into original images, generating image adversarial examples.
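A simplified sketch of DWT-domain embedding, assuming a grayscale host image and omitting the Patchwork-specific details and hyperparameter selection described above:

```python
import numpy as np
import pywt

def embed_watermark(host, mark, alpha=0.05):
    """Add a scaled watermark to the approximation subband of a
    grayscale host image, then invert the transform."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")
    cA = cA + alpha * np.resize(mark.astype(float), cA.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")
```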
no code implementations • 9 Aug 2020 • Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam
Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.
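The canonical LDP primitive is randomized response, in which each user perturbs their own bit with a probability calibrated to epsilon before reporting it; a minimal sketch:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-LDP for binary data."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_truth else 1 - bit
```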
no code implementations • 5 Aug 2020 • Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip S. Yu
Artificial Intelligence (AI) has attracted a great deal of attention in recent years.