Search Results for author: Tianqing Zhu

Found 30 papers, 4 papers with code

Generating Image Adversarial Examples by Embedding Digital Watermarks

2 code implementations • 14 Aug 2020 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

We devise an efficient mechanism to select host and watermark images, and we utilize an improved discrete wavelet transform (DWT)-based Patchwork watermarking algorithm with a set of valid hyperparameters to embed digital watermarks from the watermark image dataset into original images, thereby generating image adversarial examples.
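
Below is a minimal sketch of the DWT embedding step in the spirit of that pipeline, assuming the PyWavelets library; the paper's Patchwork details, host/watermark selection mechanism, and hyperparameter values are omitted, and the strength factor alpha here is a hypothetical stand-in.

    # Minimal sketch: embed a watermark into a host image's DWT
    # approximation band, in the spirit of the paper's DWT-based embedding.
    # `alpha` is a hypothetical strength hyperparameter, not the paper's.
    import numpy as np
    import pywt

    def embed_watermark(host: np.ndarray, mark: np.ndarray,
                        alpha: float = 0.05) -> np.ndarray:
        # Single-level 2-D Haar DWT of the grayscale host image.
        cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')
        # Crop the watermark to the approximation band and add it in.
        m = mark.astype(float)[:cA.shape[0], :cA.shape[1]]
        cA[:m.shape[0], :m.shape[1]] += alpha * m
        # Inverse DWT yields the watermarked (adversarial) candidate image.
        out = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
        return np.clip(out, 0, 255).astype(np.uint8)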

BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis

1 code implementation • 10 Apr 2022 • Yuexin Xiang, Yuchen Lei, Ding Bao, Wei Ren, Tiantian Li, Qingqing Yang, Wenmao Liu, Tianqing Zhu, Kim-Kwang Raymond Choo

Cryptocurrencies are no longer just the preferred option for cybercriminal activities on darknets, owing to their increasing adoption in mainstream applications.

A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block Mixing for Image Classification in Deep Learning

1 code implementation • 19 May 2021 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

Experimental findings on the testing set show that our scheme preserves image privacy while maintaining the availability of the training set in the deep learning models.
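
A rough illustrative sketch of label-based pixel block mixing: blocks are swapped between two same-label training images so that no published image exposes a single original. The block size and swap rule below are assumptions, not the paper's scheme.

    # Illustrative sketch: mix fixed-size pixel blocks between two images
    # that share the same label. Block size and the random swap rule are
    # assumptions for illustration, not the paper's mixing scheme.
    import numpy as np

    def mix_blocks(img_a: np.ndarray, img_b: np.ndarray, block: int = 8,
                   rng=np.random.default_rng(0)) -> np.ndarray:
        out = img_a.copy()
        h, w = img_a.shape[:2]
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                if rng.random() < 0.5:  # swap roughly half of the blocks
                    out[y:y+block, x:x+block] = img_b[y:y+block, x:x+block]
        return out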

Data Augmentation • Image Classification +1

Differentially Private Multi-Agent Planning for Logistic-like Problems

no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu

To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.

Privacy Preserving

Fairness Constraints in Semi-supervised Learning

no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu

Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.

BIG-bench Machine Learning • Fairness

Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination

no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu

A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.

BIG-bench Machine Learning • Ensemble Learning +1

Correlated Differential Privacy: Feature Selection in Machine Learning

no code implementations • 7 Oct 2020 • Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou

In this way, the proposed feature selection scheme relieves the impact of data correlation, and privacy in learning under correlated data is still guaranteed.
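
A hedged sketch of the general idea: score features privately with Laplace noise whose scale is inflated by a correlation factor, so correlated records do not silently weaken the guarantee. The scoring input and the `corr_degree` parameter are illustrative assumptions, not the paper's construction.

    # Hedged sketch: differentially private feature scoring where the
    # Laplace noise scale is inflated by a correlation factor, reflecting
    # the idea that correlated records raise the effective sensitivity.
    import numpy as np

    def private_top_k(scores: np.ndarray, k: int, epsilon: float,
                      sensitivity: float, corr_degree: float) -> np.ndarray:
        # Correlated sensitivity: base sensitivity scaled by correlation.
        scale = sensitivity * corr_degree / epsilon
        noisy = scores + np.random.laplace(0.0, scale, size=scores.shape)
        return np.argsort(noisy)[-k:]  # indices of the k best noisy scores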

BIG-bench Machine Learning • feature selection +1

Local Differential Privacy and Its Applications: A Comprehensive Survey

no code implementations • 9 Aug 2020 • Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam

Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.
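
For readers new to the topic, the canonical LDP mechanism is randomized response: each user perturbs a single bit locally, so the collector never sees the true value, and the aggregate is debiased afterwards. This is a textbook illustration, not a mechanism taken from the survey itself.

    # Randomized response, the canonical LDP mechanism: keep the true bit
    # with probability p = e^eps / (e^eps + 1), otherwise flip it.
    import math
    import random

    def randomized_response(bit: int, epsilon: float) -> int:
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return bit if random.random() < p else 1 - bit

    def estimate_frequency(reports: list, epsilon: float) -> float:
        # Debias the aggregate to estimate the true fraction of 1s.
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(reports) / len(reports)
        return (observed + p - 1) / (2 * p - 1)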

Cryptography and Security

From Distributed Machine Learning To Federated Learning: In The View Of Data Privacy And Security

no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou

Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
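
A minimal FedAvg-style round illustrates the offloading described above: clients compute updates locally and the server only averages weights. This is a plain-NumPy sketch on a least-squares toy model, not the paper's protocol.

    # Minimal FedAvg-style round: local training on each client, simple
    # weight averaging on the server. Toy least-squares model for clarity.
    import numpy as np

    def local_step(w, X, y, lr=0.1):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        return w - lr * grad

    def federated_round(w, clients):
        # clients: list of (X, y) pairs; the server only averages results.
        updates = [local_step(w, X, y) for X, y in clients]
        return np.mean(updates, axis=0)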

Distributed, Parallel, and Cluster Computing

DP-Image: Differential Privacy for Image Data in Feature Space

no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou

The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy

no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou

The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
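
The general shape of such a defense can be sketched as perturbing the released confidence vector with noise governed by a single privacy parameter epsilon and renormalizing; the noise placement and scale below are assumptions, not the paper's exact mechanism.

    # Hedged sketch: perturb the model's confidence vector with Laplace
    # noise controlled by one privacy parameter, then renormalize so the
    # output is still a valid probability distribution.
    import numpy as np

    def dp_confidences(probs: np.ndarray, epsilon: float) -> np.ndarray:
        noisy = probs + np.random.laplace(0.0, 1.0 / epsilon,
                                          size=probs.shape)
        noisy = np.clip(noisy, 1e-8, None)
        return noisy / noisy.sum()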

Label-only Model Inversion Attack: The Attack that Requires the Least Information

no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou

Contemporary model inversion attack strategies are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

no code implementations • 22 Mar 2022 • Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.

Face Swapping

Momentum Gradient Descent Federated Learning with Local Differential Privacy

no code implementations • 28 Sep 2022 • Mengde Han, Tianqing Zhu, Wanlei Zhou

The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.
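
An illustrative combination of the two ingredients named in the title: a client-side momentum step whose clipped update is perturbed with Laplace noise before being shared. The clipping bound and noise scale are assumptions made for the sketch.

    # Illustrative client-side step: momentum accumulation, then clipping
    # to bound sensitivity, then Laplace noise for local differential
    # privacy. Only the perturbed step leaves the client.
    import numpy as np

    def ldp_momentum_update(grad, velocity, lr=0.01, beta=0.9,
                            clip=1.0, epsilon=1.0):
        velocity = beta * velocity + grad
        step = lr * velocity
        norm = np.linalg.norm(step)
        if norm > clip:
            step = step * (clip / norm)
        step += np.random.laplace(0.0, clip / epsilon, size=step.shape)
        return step, velocity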

Federated Learning • Privacy Preserving

How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers

no code implementations • 20 Oct 2022 • Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale.

Attribute

Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness

no code implementations • 23 Mar 2023 • Huajie Chen, Tianqing Zhu, Yuan Zhao, Bo Liu, Xin Yu, Wanlei Zhou

By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
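
The core intuition can be sketched with a plain FFT variant: write the payload into low-frequency coefficients so that distortions of the high-frequency components leave it largely intact. LIDS itself is more elaborate; the coefficient positions and amplitude below are illustrative assumptions.

    # Sketch of the core idea only: hide bits in low-frequency FFT
    # coefficients near the DC term, where compression and blurring do
    # the least damage. Positions and amplitude are assumptions.
    import numpy as np

    def embed_low_freq(cover: np.ndarray, bits, amp: float = 2.0):
        F = np.fft.fftshift(np.fft.fft2(cover.astype(float)))
        cy, cx = np.array(F.shape) // 2
        for i, b in enumerate(bits):
            F[cy + 1, cx + 1 + i] += amp * (1 if b else -1)
        out = np.fft.ifft2(np.fft.ifftshift(F)).real
        return np.clip(out, 0, 255).astype(np.uint8)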

Retrieval • Specificity

Towards Robust GAN-generated Image Detection: a Multi-view Completion Representation

no code implementations • 2 Jun 2023 • Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou

GAN-generated image detection now becomes the first line of defense against the malicious uses of machine-synthesized image manipulations such as deepfakes.

Machine Unlearning: A Survey

no code implementations • 6 Jun 2023 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu

Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more.

Machine Unlearning • Medical Diagnosis +2

Boosting Model Inversion Attacks with Adversarial Examples

no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou

Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.

Generative Adversarial Networks Unlearning

no code implementations • 19 Aug 2023 • Hui Sun, Tianqing Zhu, Wenhan Chang, Wanlei Zhou

Based on the substitution mechanism and fake label, we propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
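
As a loose analog of the fake-label idea (the paper applies it inside GAN models, which is more involved), one can fine-tune a classifier on the forget set with substituted labels so the learned association is overwritten. Everything in this PyTorch sketch, including the substitution rule, is an assumption for illustration.

    # Loose classifier-side analog of fake-label unlearning: fine-tune on
    # the forget set with deliberately wrong labels so the model's learned
    # association with that data is overwritten.
    import torch
    import torch.nn.functional as F

    def unlearn_with_fake_labels(model, forget_loader, num_classes,
                                 epochs=3):
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x, y in forget_loader:
                # Substitute each true label with a different random class.
                fake_y = (y + torch.randint(1, num_classes, y.shape)) \
                         % num_classes
                loss = F.cross_entropy(model(x), fake_y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model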

Machine Unlearning

Divide and Ensemble: Progressively Learning for the Unknown

no code implementations • 9 Oct 2023 • Hu Zhang, Xin Shen, Heming Du, Huiqiang Chen, Chen Liu, Hongwei Sheng, Qingzheng Xu, MD Wahiduzzaman Khan, Qingtao Yu, Tianqing Zhu, Scott Chapman, Zi Huang, Xin Yu

In the wheat nutrient deficiencies classification challenge, we present the DividE and EnseMble (DEEM) method for progressive test data predictions.

When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers through Membership Inference Attacks

no code implementations • 7 Nov 2023 • Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou

It leverages the difference in the predictions from both the original and fairness-enhanced models and exploits the observed prediction gaps as attack clues.
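
That clue is easy to sketch: score each sample by the gap between the two models' predicted probabilities and threshold it as a membership signal. The max-over-classes gap and the threshold below are illustrative assumptions, not the paper's calibrated attack.

    # Sketch of the attack clue described above: the per-sample gap
    # between the original and fairness-enhanced models' predictions,
    # thresholded as a membership signal.
    import numpy as np

    def membership_scores(p_orig: np.ndarray, p_fair: np.ndarray):
        # p_orig, p_fair: (n_samples, n_classes) predicted probabilities.
        return np.abs(p_orig - p_fair).max(axis=1)

    def infer_members(p_orig, p_fair, threshold=0.1):
        return membership_scores(p_orig, p_fair) > threshold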

Fairness

Reinforcement Unlearning

no code implementations • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Inference Attack • Machine Unlearning +1

AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization

no code implementations • 19 Feb 2024 • Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu

At the intersection of CV and NLP is the problem of image captioning, where the related models' robustness against adversarial attacks has not been well studied.

Adversarial Attack • Image Captioning

The Frontier of Data Erasure: Machine Unlearning for Large Language Models

no code implementations • 23 Mar 2024 • Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu, Dusit Niyato

Large Language Models (LLMs) are foundational to AI advancements, facilitating applications like predictive text generation.

Machine Unlearning • Text Generation

Machine Unlearning via Null Space Calibration

1 code implementation • 21 Apr 2024 • Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou

Current research centres on efficient unlearning to erase the influence of data from the model and neglects the subsequent impacts on the remaining data.
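
A hedged linear-model sketch of what null-space calibration could look like: project the unlearning update into the null space of the remaining data's feature matrix, so that erasing the forget set barely perturbs predictions on retained data. The paper's method is more involved; this captures only the geometric idea.

    # Hedged sketch: keep only the component of an update that lies in the
    # null space of the remaining data's feature matrix, i.e. the part
    # invisible to predictions on retained samples. Linear-model toy.
    import numpy as np

    def null_space_project(update: np.ndarray,
                           X_remain: np.ndarray) -> np.ndarray:
        # Rows of X_remain span the directions that affect retained data.
        _, s, Vt = np.linalg.svd(X_remain, full_matrices=True)
        rank = int((s > 1e-10).sum())
        N = Vt[rank:].T               # orthonormal basis of the null space
        return N @ (N.T @ update)     # X_remain @ result is (near) zero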
