Search Results for author: Tianqing Zhu

Found 56 papers, 9 papers with code

Bias Amplification in RAG: Poisoning Knowledge Retrieval to Steer LLMs

no code implementations13 Jun 2025 LinLin Wang, Tianqing Zhu, Laiqiao Qin, Longxiang Gao, Wanlei Zhou

To show the impact of the bias, this paper proposes a Bias Retrieval and Reward Attack (BRRA) framework, which systematically investigates attack pathways that amplify language model biases through RAG system manipulation.

Fairness RAG +2
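
The BRRA mechanics beyond this snippet are not shown here; as a rough, hedged illustration of the retrieval-side idea only, the toy sketch below (all names and numbers are hypothetical) shows how documents crafted to sit close to a query embedding can crowd the top-k context a RAG system hands to an LLM:

```python
import numpy as np

def top_k_retrieve(query_vec, doc_vecs, k=3):
    # Rank documents by cosine similarity to the query embedding.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(0)
query = rng.normal(size=64)
corpus = rng.normal(size=(100, 64))                  # benign documents
poisoned = query + 0.05 * rng.normal(size=(5, 64))   # near-duplicates of the query
docs = np.vstack([corpus, poisoned])

print(top_k_retrieve(query, docs))   # indices >= 100 dominate: poisoned docs win
```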

Recovering Fairness Directly from Modularity: a New Way for Fair Community Partitioning

no code implementations27 May 2025 Yufeng Wang, Yiguang Bai, Tianqing Zhu, Ismail Ben Ayed, Jing Yuan

Community partitioning is crucial in network analysis, with modularity optimization being the prevailing technique.

Fairness
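
Modularity, the quantity being optimized above, scores a partition by comparing within-community edges against a degree-preserving random baseline: Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). A minimal NumPy sketch on a toy graph (variable names are illustrative):

```python
import numpy as np

def modularity(A, communities):
    """Newman's Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)               # node degrees
    two_m = A.sum()                 # equals 2m for an undirected adjacency matrix
    delta = np.equal.outer(communities, communities)
    return ((A - np.outer(k, k) / two_m) * delta).sum() / two_m

# Two triangles joined by one bridge edge: a clean two-community toy graph.
A = np.zeros((6, 6))
for i, j in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(2,3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0,0,0,1,1,1])))   # ~0.357, well above 0
```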

Chain-of-Lure: A Synthetic Narrative-Driven Approach to Compromise Large Language Models

no code implementations23 May 2025 Wenhan Chang, Tianqing Zhu, Yu Zhao, Shuangyong Song, Ping Xiong, Wanlei Zhou, Yongxiang Li

In the era of rapid generative AI development, interactions between humans and large language models face significant risks of misuse.

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers

no code implementations8 Mar 2025 Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou

While in-processing fairness approaches show promise in mitigating biased predictions, their potential impact on privacy leakage remains under-explored.

Attribute Fairness

Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning

no code implementations28 Jan 2025 Dayong Ye, Tianqing Zhu, Jiayang Li, Kun Gao, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang

For example, the adversary can challenge the model owner by revealing that, despite efforts to unlearn it, the influence of the duplicated subset remains in the model.

Machine Unlearning

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

no code implementations28 Jan 2025 Dayong Ye, Tianqing Zhu, Shang Wang, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang

Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities to enhance productivity.

Model extraction

AFed: Algorithmic Fair Federated Learning

no code implementations6 Jan 2025 Huiqiang Chen, Tianqing Zhu, Wanlei Zhou, Wei Zhao

Federated Learning (FL) has gained significant attention as it facilitates collaborative machine learning among multiple clients without centralizing their data on a server.

Fairness Federated Learning

Vertical Federated Unlearning via Backdoor Certification

1 code implementation16 Dec 2024 Mengde Han, Tianqing Zhu, Lefeng Zhang, Huan Huo, Wanlei Zhou

We introduce an innovative modification to traditional VFL by employing a mechanism that inverts the typical learning trajectory with the objective of extracting specific data contributions.

Vertical Federated Learning
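
The snippet describes inverting the typical learning trajectory; one common way to express such an inversion is gradient ascent on the loss over the data to be forgotten. The PyTorch-style sketch below assumes that reading and is not the paper's full mechanism (which also involves backdoor certification):

```python
import torch

def unlearn_by_ascent(model, forget_loader, loss_fn, lr=1e-3, steps=10):
    # Ascend (not descend) the loss on the data to be forgotten,
    # pushing the model away from what it learned from that data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), forget_loader):
        opt.zero_grad()
        loss = -loss_fn(model(x), y)   # negated loss => gradient ascent
        loss.backward()
        opt.step()
    return model
```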

Large Language Models Merging for Enhancing the Link Stealing Attack on Graph Neural Networks

no code implementations8 Dec 2024 Faqian Guan, Tianqing Zhu, Wenhan Chang, Wei Ren, Wanlei Zhou

However, we find that an attacker can combine the data knowledge of multiple attackers to create a more effective attack model, which can be referred to as a cross-dataset attack.

Machine Unlearning on Pre-trained Models by Residual Feature Alignment Using LoRA

no code implementations13 Nov 2024 Laiqiao Qin, Tianqing Zhu, LinLin Wang, Wanlei Zhou

Machine unlearning is a newly emerged technology that removes a subset of the training data from a trained model without affecting the model's performance on the remaining data.

Machine Unlearning
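
LoRA, the adapter used here, trains a low-rank residual ΔW = BA on top of frozen pre-trained weights. A minimal sketch of such a layer (the class and hyperparameters are illustrative; the paper's residual feature alignment objective is not shown):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank residual B @ A."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(128, 64))
print(layer(torch.randn(2, 128)).shape)        # torch.Size([2, 64])
```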

New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook

no code implementations12 Nov 2024 Meng Yang, Tianqing Zhu, Chi Liu, Wanlei Zhou, Shui Yu, Philip S. Yu

With the taxonomy analysis, we capture the unique security and privacy issues of pre-trained models, categorizing and summarizing existing security issues based on their characteristics.

Game-Theoretic Machine Unlearning: Mitigating Extra Privacy Leakage

no code implementations6 Nov 2024 Hengzhu Liu, Tianqing Zhu, Lefeng Zhang, Ping Xiong

Additionally, experimental results on real-world datasets demonstrate the effectiveness of this game-theoretic unlearning algorithm and its ability to generate an unlearned model with performance similar to that of the retrained one, while mitigating extra privacy leakage risks.

Machine Unlearning

Zero-shot Class Unlearning via Layer-wise Relevance Analysis and Neuronal Path Perturbation

no code implementations31 Oct 2024 Wenhan Chang, Tianqing Zhu, Yufeng Wu, Wanlei Zhou

However, it faces several key challenges, including accurately implementing unlearning, ensuring privacy protection during the unlearning process, and achieving effective unlearning without significantly compromising model performance.

Machine Unlearning Privacy Preserving

When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?

no code implementations20 Oct 2024 Shang Wang, Tianqing Zhu, Dayong Ye, Wanlei Zhou

While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting.

Machine Unlearning RAG +3

Query-Efficient Video Adversarial Attack with Stylized Logo

no code implementations22 Aug 2024 Duoxun Tang, Yuxin Cao, Xi Xiao, Derui Wang, Sheng Wen, Tianqing Zhu

Therefore, to generate adversarial examples with a low budget and higher verisimilitude, we propose a novel black-box video attack framework, called Stylized Logo Attack (SLA).

Adversarial Attack Reinforcement Learning (RL) +2

QUEEN: Query Unlearning against Model Extraction

no code implementations1 Jul 2024 Huajie Chen, Tianqing Zhu, Lefeng Zhang, Bo Liu, Derui Wang, Wanlei Zhou, Minhui Xue

To limit the potential threat, QUEEN performs sensitivity measurement and output perturbation, which prevent the adversary from training a piracy model with high performance.

Model extraction
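
QUEEN's actual sensitivity measure is not given in this snippet; as a hedged illustration of the general recipe, a defended API can perturb its output distribution only for queries it scores as extraction-prone, degrading the training signal available to a piracy model (threshold, noise scale, and names are assumptions):

```python
import numpy as np

def defended_predict(logits, sensitivity, threshold=0.8, noise_scale=1.0,
                     rng=np.random.default_rng(0)):
    """Return softmax probabilities; perturb them when the query looks extraction-prone."""
    if sensitivity > threshold:
        logits = logits + rng.normal(scale=noise_scale, size=logits.shape)
    p = np.exp(logits - logits.max())
    return p / p.sum()

print(defended_predict(np.array([2.0, 0.5, 0.1]), sensitivity=0.95))
```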

Large Language Models for Link Stealing Attacks Against Graph Neural Networks

no code implementations22 Jun 2024 Faqian Guan, Tianqing Zhu, Hui Sun, Wanlei Zhou, Philip S. Yu

The handling of these varying data dimensions posed a challenge in using a single model to effectively conduct link stealing attacks on different datasets.

Recommendation Systems

Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation

no code implementations18 Jun 2024 Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu

However, training data cannot be accessed on the server under the federated learning paradigm, conflicting with the requirements of the centralized unlearning process.

Federated Learning Machine Unlearning +1

Towards Efficient Target-Level Machine Unlearning Based on Essential Graph

1 code implementation16 Jun 2024 Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Wei Zhao

After that, we simultaneously filter out parameters that are also important for the remaining targets and apply a pruning-based unlearning method, a simple but effective way to remove information about the target that needs to be forgotten.

Machine Unlearning
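
A hedged sketch of the filter-then-prune idea above (the importance measure and masking rule are illustrative assumptions, not the paper's exact method): rank parameters by importance on the forget target versus the remaining targets, then zero those that matter mainly to the target being forgotten.

```python
import torch

def prune_for_unlearning(model, imp_forget, imp_retain, ratio=0.05):
    """Zero the weights most important to the forget target but not to the rest.

    imp_forget / imp_retain: dicts of per-parameter importance tensors,
    e.g. accumulated |grad| on the forget set and on the retain set.
    """
    for name, p in model.named_parameters():
        score = imp_forget[name] - imp_retain[name]   # target-specific importance
        k = max(1, int(ratio * score.numel()))
        cutoff = score.flatten().topk(k).values.min()
        p.data[score >= cutoff] = 0.0                 # prune target-specific weights
```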

Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions

no code implementations16 Jun 2024 Laiqiao Qin, Tianqing Zhu, Wanlei Zhou, Philip S. Yu

We discuss how KD can address the challenges in FL, including privacy protection, data heterogeneity, communication efficiency, and personalization.

Federated Learning Knowledge Distillation +3
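
The distillation objective underlying these KD-in-FL designs is standard: a temperature-softened KL term between teacher and student outputs plus ordinary cross-entropy, as in Hinton et al. A minimal PyTorch sketch:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: match the teacher's temperature-softened distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```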

Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives

no code implementations16 Jun 2024 LinLin Wang, Tianqing Zhu, Wanlei Zhou, Philip S. Yu

Building upon our observations, we identify the trade-offs between privacy and fairness and between security and fairness within the context of federated learning.

Fairness Federated Learning

Recent Advances in Federated Learning Driven Large Language Models: A Survey on Architecture, Performance, and Security

no code implementations14 Jun 2024 Youyang Qu, Ming Liu, Tianqing Zhu, Longxiang Gao, Shui Yu, Wanlei Zhou

Federated Learning (FL) offers a promising paradigm for training Large Language Models (LLMs) in a decentralized manner while preserving data privacy and minimizing communication overhead.

Ethics Federated Learning +3

3DRealCar: An In-the-wild RGB-D Car Dataset with 360-degree Views

no code implementations7 Jun 2024 Xiaobiao Du, Haiyang Sun, Shuyun Wang, Zhuojie Wu, Hongwei Sheng, Jiaying Ying, Ming Lu, Tianqing Zhu, Kun Zhan, Xin Yu

(1) High-Volume: 2,500 cars are meticulously scanned by 3D scanners, obtaining car images and point clouds with real-world dimensions; (2) High-Quality: each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) High-Diversity: the dataset contains various cars from over 100 brands, collected under three distinct lighting conditions, including reflective, standard, and dark.

3D Reconstruction

Federated Learning with Blockchain-Enhanced Machine Unlearning: A Trustworthy Approach

no code implementations27 May 2024 Xuhan Zuo, Minghao Wang, Tianqing Zhu, Lefeng Zhang, Shui Yu, Wanlei Zhou

With the growing need to comply with privacy regulations and respond to user data deletion requests, integrating machine unlearning into IoT-based federated learning has become imperative.

Federated Learning Machine Unlearning +1

Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning

no code implementations24 May 2024 Wenhan Chang, Tianqing Zhu, Heng Xu, Wenjian Liu, Wanlei Zhou

In this paper, to accurately define the unlearning class of complex data, we apply the notion of a Concept, rather than an image feature or a token of text data, to represent the semantic information of the unlearning class.

Data Poisoning image-classification +2

Machine Unlearning via Null Space Calibration

1 code implementation21 Apr 2024 Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou

Current research centres on efficient unlearning to erase the influence of data from the model and neglects the subsequent impacts on the remaining data.

Machine Unlearning
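
The snippet contrasts erasing influence with protecting the remaining data; one hedged way to make that concrete (an assumption, not necessarily the paper's calibration) is to project the unlearning update onto the null space of the retained data's feature matrix, so a linear layer's outputs on retained inputs are provably unchanged:

```python
import numpy as np

def nullspace_project(update, retained_feats, tol=1e-10):
    """Project a weight update onto the null space of the retained features.

    For a linear layer y = W x, any delta-W whose rows lie in null(X_retain)
    leaves outputs on retained inputs exactly unchanged.
    """
    _, s, vt = np.linalg.svd(retained_feats, full_matrices=True)
    rank = (s > tol).sum()
    null_basis = vt[rank:]                     # rows spanning null(X_retain)
    return update @ null_basis.T @ null_basis  # keep only the null-space part

X_retain = np.random.randn(50, 100)            # 50 samples, 100-dim features
dW = np.random.randn(10, 100)                  # raw unlearning update
dW_safe = nullspace_project(dW, X_retain)
print(np.abs(X_retain @ dW_safe.T).max())      # ~0: retained outputs unaffected
```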

The Frontier of Data Erasure: Machine Unlearning for Large Language Models

no code implementations23 Mar 2024 Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu, Dusit Niyato

Large Language Models (LLMs) are foundational to AI advancements, facilitating applications like predictive text generation.

Machine Unlearning Text Generation

AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization

1 code implementation19 Feb 2024 Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu

This paper presents a novel adversarial attack strategy, AICAttack (Attention-based Image Captioning Attack), designed to attack image captioning models through subtle perturbations on images.

Adversarial Attack Image Captioning

Reinforcement Unlearning

1 code implementation26 Dec 2023 Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Kun Gao, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Inference Attack Machine Unlearning +2

When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers via Membership Inference Attacks

no code implementations7 Nov 2023 Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou

It leverages the difference in the predictions from both the original and fairness-enhanced models and exploits the observed prediction gaps as attack clues.

Fairness Prediction
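
A hedged sketch of the attack signal described above (names and threshold are illustrative): training members tend to show larger prediction shifts between the original and the fairness-enhanced model, so the gap itself can be thresholded.

```python
import numpy as np

def gap_attack(p_original, p_fair, threshold=0.15):
    """Flag samples whose prediction moved most after fairness fine-tuning.

    p_original, p_fair: per-sample positive-class probabilities from the
    original and the fairness-enhanced binary classifier.
    """
    gap = np.abs(p_fair - p_original)
    return gap > threshold            # True => predicted training member

print(gap_attack(np.array([0.91, 0.40]), np.array([0.55, 0.42])))
# [ True False]
```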

Divide and Ensemble: Progressively Learning for the Unknown

no code implementations9 Oct 2023 Hu Zhang, Xin Shen, Heming Du, Huiqiang Chen, Chen Liu, Hongwei Sheng, Qingzheng Xu, MD Wahiduzzaman Khan, Qingtao Yu, Tianqing Zhu, Scott Chapman, Zi Huang, Xin Yu

In the wheat nutrient deficiencies classification challenge, we present the DividE and EnseMble (DEEM) method for progressive test data predictions.

Generative Adversarial Networks Unlearning

no code implementations19 Aug 2023 Hui Sun, Tianqing Zhu, Wenhan Chang, Wanlei Zhou

Based on the substitution mechanism and fake label, we propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.

Machine Unlearning

Boosting Model Inversion Attacks with Adversarial Examples

no code implementations24 Jun 2023 Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou

Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.

Machine Unlearning: A Survey

no code implementations6 Jun 2023 Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu

Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more.

Machine Unlearning Medical Diagnosis +3

Towards Robust GAN-generated Image Detection: a Multi-view Completion Representation

no code implementations2 Jun 2023 Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou

GAN-generated image detection has become the first line of defense against malicious uses of machine-synthesized image manipulations such as deepfakes.

Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness

no code implementations23 Mar 2023 Huajie Chen, Tianqing Zhu, Yuan Zhao, Bo Liu, Xin Yu, Wanlei Zhou

By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.

Retrieval Specificity

How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers

no code implementations20 Oct 2022 Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale.

Attribute Deep Learning

Momentum Gradient Descent Federated Learning with Local Differential Privacy

no code implementations28 Sep 2022 Mengde Han, Tianqing Zhu, Wanlei Zhou

The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.

Federated Learning Privacy Preserving
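
A hedged sketch of the client/server split such a design implies (clipping bound, noise scale, and momentum factor are illustrative assumptions): each client clips and perturbs its gradient locally before upload, and the server folds the noisy average into a momentum buffer.

```python
import numpy as np

def ldp_client_gradient(grad, clip=1.0, epsilon=1.0, rng=np.random.default_rng(0)):
    # Clip to bound sensitivity, then add Laplace noise locally (LDP).
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return g + rng.laplace(scale=clip / epsilon, size=g.shape)

def server_momentum_step(weights, client_grads, velocity, lr=0.1, beta=0.9):
    avg = np.mean(client_grads, axis=0)
    velocity = beta * velocity + avg          # momentum accumulates noisy averages
    return weights - lr * velocity, velocity

w, v = np.zeros(3), np.zeros(3)
grads = [np.array([0.5, -0.2, 0.1]), np.array([0.4, -0.1, 0.3])]
noisy = [ldp_client_gradient(g, rng=np.random.default_rng(i)) for i, g in enumerate(grads)]
w, v = server_momentum_step(w, noisy, v)
print(w)
```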

BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis

1 code implementation10 Apr 2022 Yuexin Xiang, Yuchen Lei, Ding Bao, Wei Ren, Tiantian Li, Qingqing Yang, Wenmao Liu, Tianqing Zhu, Kim-Kwang Raymond Choo

Cryptocurrencies are no longer just the preferred option for cybercriminal activities on darknets, due to their increasing adoption in mainstream applications.

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

no code implementations22 Mar 2022 Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.

Face Swapping

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy

no code implementations13 Mar 2022 Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou

The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
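
As a hedged illustration of the general recipe (not the paper's exact mechanism), a defense can add Laplace noise to the confidence vector while pinning the argmax, so the predicted label, and hence accuracy, is unchanged:

```python
import numpy as np

def dp_confidences(probs, epsilon=1.0, rng=np.random.default_rng(0)):
    """Noise the confidence vector, then pin the top label so accuracy is kept."""
    top = probs.argmax()
    noisy = probs + rng.laplace(scale=1.0 / epsilon, size=probs.shape)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()
    j = noisy.argmax()
    if j != top:                     # hedged heuristic, not the paper's mechanism
        noisy[top], noisy[j] = noisy[j], noisy[top]
    return noisy

print(dp_confidences(np.array([0.7, 0.2, 0.1])))
```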

Label-only Model Inversion Attack: The Attack that Requires the Least Information

no code implementations13 Mar 2022 Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou

In launching a contemporary model inversion attack, the strategies discussed are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.

A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block Mixing for Image Classification in Deep Learning

1 code implementation19 May 2021 Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

Experimental findings on the testing set show that our scheme preserves image privacy while maintaining the availability of the training set in the deep learning models.

Data Augmentation Deep Learning +3
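
A hedged sketch of the label-based mixing idea (block size and swap rule are illustrative assumptions): swapping aligned pixel blocks between two images that share a label obscures each individual image while preserving class-level signal for training.

```python
import numpy as np

def mix_same_label(img_a, img_b, block=8, rng=np.random.default_rng(0)):
    """Randomly swap aligned pixel blocks between two same-label images."""
    out_a, out_b = img_a.copy(), img_b.copy()
    h, w = img_a.shape[:2]
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            if rng.random() < 0.5:
                out_a[i:i+block, j:j+block], out_b[i:i+block, j:j+block] = (
                    img_b[i:i+block, j:j+block].copy(),
                    img_a[i:i+block, j:j+block].copy())
    return out_a, out_b

a, b = np.zeros((32, 32)), np.ones((32, 32))
ma, mb = mix_same_label(a, b)
print(ma.mean(), mb.mean())   # roughly complementary mixing ratios
```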

DP-Image: Differential Privacy for Image Data in Feature Space

no code implementations12 Mar 2021 Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou

The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.

From Distributed Machine Learning To Federated Learning: In The View Of Data Privacy And Security

no code implementations19 Oct 2020 Sheng Shen, Tianqing Zhu, Di wu, Wei Wang, Wanlei Zhou

Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.

Distributed, Parallel, and Cluster Computing
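
The canonical form of this offloading is federated averaging (McMahan et al.): clients train locally, and the server only aggregates their weights, weighted by local dataset size. A minimal NumPy sketch:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(fedavg(clients, client_sizes=[100, 300]))   # [2.5 3.5]
```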

Correlated Differential Privacy: Feature Selection in Machine Learning

no code implementations7 Oct 2020 Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou

In this way, the impact of data correlation is mitigated by the proposed feature selection scheme, and moreover, privacy in learning under data correlation is guaranteed.

BIG-bench Machine Learning feature selection +1

Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination

no code implementations25 Sep 2020 Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu

A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.

BIG-bench Machine Learning Ensemble Learning +1

Fairness Constraints in Semi-supervised Learning

no code implementations14 Sep 2020 Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu

Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.

BIG-bench Machine Learning Fairness

Differentially Private Multi-Agent Planning for Logistic-like Problems

no code implementations16 Aug 2020 Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu

To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.

Privacy Preserving

Generating Image Adversarial Examples by Embedding Digital Watermarks

2 code implementations14 Aug 2020 Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

We devise an efficient mechanism to select host images and watermark images, and utilize an improved discrete wavelet transform (DWT)-based Patchwork watermarking algorithm with a set of valid hyperparameters to embed digital watermarks from the watermark image dataset into original images, generating image adversarial examples.
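
A hedged sketch of the DWT stage of such a pipeline, using PyWavelets (the blending rule and parameters are illustrative assumptions, not the paper's Patchwork algorithm or its hyperparameters):

```python
import numpy as np
import pywt

def embed_watermark(host, mark, alpha=0.05):
    """Blend a watermark into the host's detail coefficients via a 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
    m = np.resize(mark, cH.shape)             # illustrative resize, not Patchwork proper
    cH = cH + alpha * m
    cV = cV + alpha * m
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

host = np.random.rand(64, 64)
mark = np.random.rand(32, 32)
adv = embed_watermark(host, mark)
print(np.abs(adv - host).max())               # perturbation stays small for small alpha
```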

Local Differential Privacy and Its Applications: A Comprehensive Survey

no code implementations9 Aug 2020 Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam

Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.

Cryptography and Security
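
Randomized response is the textbook LDP mechanism such surveys begin with: each user flips their own bit with a probability calibrated to ε, so the curator never sees a trustworthy individual value yet can debias the aggregate. A minimal sketch:

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit with prob e^eps/(1+e^eps); otherwise flip it."""
    p_true = np.exp(epsilon) / (1 + np.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def debias(reports, epsilon):
    """Unbiased estimate of the true mean from randomized reports."""
    p = np.exp(epsilon) / (1 + np.exp(epsilon))
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

true_bits = np.random.default_rng(1).integers(0, 2, 10000)
reports = [randomized_response(b, 1.0, np.random.default_rng(i))
           for i, b in enumerate(true_bits)]
print(true_bits.mean(), debias(reports, 1.0))   # the two means should be close
```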
