Search Results for author: Tianqing Zhu

Found 19 papers, 4 papers with code

How Does a Deep Learning Model Architecture Impact Its Privacy?

no code implementations • 20 Oct 2022 • Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

We investigate several representative model architectures from CNNs to Transformers, and show that Transformers are generally more vulnerable to privacy attacks than CNNs.
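
The privacy attacks considered include membership inference, which tries to decide whether a given sample was part of a model's training set. As a point of reference only, a minimal, hypothetical confidence-thresholding membership test (not the attack setup used in the paper) could look like this, assuming a trained PyTorch classifier `model` and a batch tensor `x`:

```python
import torch
import torch.nn.functional as F

def membership_score(model, x):
    """Top softmax confidence on x; unusually high confidence is
    weak evidence that x was seen during training."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values

def guess_member(model, x, threshold=0.9):
    """Naive membership inference: flag samples the model is more
    confident about than a fixed (illustrative) threshold."""
    return membership_score(model, x) > threshold
```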

Momentum Gradient Descent Federated Learning with Local Differential Privacy

no code implementations • 28 Sep 2022 • Mengde Han, Tianqing Zhu, Wanlei Zhou

The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.

Federated Learning · Privacy Preserving
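
As a rough illustration of the combination named in the title, each client can run momentum gradient descent locally and randomize its update before uploading it to the server. The sketch below is a hypothetical simplification (plain Gaussian noise after norm clipping, with no formal privacy accounting) and is not the paper's algorithm:

```python
import numpy as np

def client_step(weights, grads, velocity, lr=0.01, momentum=0.9,
                clip=1.0, noise_std=0.1, rng=None):
    """One local client step: momentum update, then clip and
    perturb the update before it leaves the device."""
    rng = rng or np.random.default_rng()
    velocity = momentum * velocity - lr * grads
    update = velocity
    norm = np.linalg.norm(update)
    if norm > clip:                      # bound each update's norm
        update = update * (clip / norm)
    update = update + rng.normal(0.0, noise_std, size=update.shape)
    return weights + update, velocity
```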

BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis

1 code implementation • 10 Apr 2022 • Yuexin Xiang, Yuchen Lei, Ding Bao, Wei Ren, Tiantian Li, Qingqing Yang, Wenmao Liu, Tianqing Zhu, Kim-Kwang Raymond Choo

Cryptocurrencies are no longer just the preferred option for cybercriminal activities on darknets, due to their increasing adoption in mainstream applications.

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

no code implementations • 22 Mar 2022 • Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.

Face Swapping

Label-only Model Inversion Attack: The Attack that Requires the Least Information

no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou

Contemporary model inversion attack strategies are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy

no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou

The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
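
The defense works on the confidence score vector a model returns, distorting it with a single parameter so that inference attacks learn less while the predicted label, and hence accuracy, is unchanged. The sketch below only illustrates that general idea with a temperature-like knob; it is not the paper's differentially private mechanism:

```python
import numpy as np

def defend_scores(confidences, t=4.0):
    """Flatten a softmax confidence vector with one parameter t > 1.
    The ranking of classes is preserved, so the predicted label and
    test accuracy do not change, but the reported scores leak less."""
    scores = np.asarray(confidences, dtype=float)
    flattened = scores ** (1.0 / t)
    return flattened / flattened.sum()

# The top class stays on top, but the gap an attacker sees shrinks.
print(defend_scores([0.90, 0.07, 0.03]))  # -> roughly [0.51, 0.27, 0.22]
```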

Semantic-Preserving Adversarial Text Attacks

2 code implementations • 23 Aug 2021 • Xinghao Yang, Weifeng Liu, James Bailey, Tianqing Zhu, DaCheng Tao, Wei Liu

In this paper, we propose a Bigram and Unigram based adaptive Semantic Preservation Optimization (BU-SPO) method to examine the vulnerability of deep models.

Adversarial Text · Semantic Similarity +3
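
Word-level attacks of this kind replace unigrams (and, in BU-SPO, bigrams) with semantically close candidates until the classifier's prediction flips. The following is a generic greedy single-substitution loop with a hypothetical synonym table, shown only to make the setting concrete; it is not the BU-SPO optimization:

```python
def greedy_substitution_attack(classify, sentence, synonyms):
    """classify: callable mapping text to a predicted label.
    synonyms: dict of word -> candidate replacements (hypothetical).
    Returns the first single-word substitution that changes the
    prediction, or None if no candidate succeeds."""
    words = sentence.split()
    original_label = classify(sentence)
    for i, word in enumerate(words):
        for candidate in synonyms.get(word.lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            if classify(" ".join(trial)) != original_label:
                return " ".join(trial)
    return None
```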

A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block Mixing for Image Classification in Deep Learning

1 code implementation • 19 May 2021 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

Experimental findings on the testing set show that our scheme preserves image privacy while maintaining the usability of the training set for deep learning models.

Data Augmentation · Image Classification +1
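
The title points at the core operation: training images that share a label are cut into pixel blocks, and the blocks are mixed across those images, so no published image matches an original one. A rough NumPy sketch of mixing blocks between two same-label images (the block size and random swap pattern are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def mix_blocks(img_a, img_b, block=16, rng=None):
    """Swap a random subset of (block x block) tiles between two
    images of the same label. Height and width must be multiples
    of `block`."""
    rng = rng or np.random.default_rng()
    mixed_a, mixed_b = img_a.copy(), img_b.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            if rng.random() < 0.5:
                tile = mixed_a[y:y + block, x:x + block].copy()
                mixed_a[y:y + block, x:x + block] = mixed_b[y:y + block, x:x + block]
                mixed_b[y:y + block, x:x + block] = tile
    return mixed_a, mixed_b
```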

DP-Image: Differential Privacy for Image Data in Feature Space

no code implementations • 12 Mar 2021 • Bo Liu, Ming Ding, Hanyu Xue, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou

The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.

From Distributed Machine Learning To Federated Learning: In The View Of Data Privacy And Security

no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou

Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.

Distributed, Parallel, and Cluster Computing
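
The defining difference is that clients keep their data and train locally, while the server only aggregates the resulting models, for example by federated averaging. A minimal sketch of that aggregation step (parameters as NumPy arrays; the local training loop is omitted):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each layer's parameters,
    weighted by the number of local samples per client.
    client_weights: one list of np.ndarray layers per client."""
    total = float(sum(client_sizes))
    averaged = []
    for layer in range(len(client_weights[0])):
        averaged.append(sum(w[layer] * (n / total)
                            for w, n in zip(client_weights, client_sizes)))
    return averaged
```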

Correlated Differential Privacy: Feature Selection in Machine Learning

no code implementations • 7 Oct 2020 • Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou

In this way, the proposed feature selection scheme relieves the impact of data correlation and, moreover, guarantees privacy in learning even when the data are correlated.

BIG-bench Machine Learning · Privacy Preserving

Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination

no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu

Experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.

BIG-bench Machine Learning · Ensemble Learning +1

Fairness Constraints in Semi-supervised Learning

no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu

Extensive experiments show that our method achieves fair semi-supervised learning and reaches a better trade-off between accuracy and fairness than fair supervised learning.

BIG-bench Machine Learning · Fairness

Differentially Private Multi-Agent Planning for Logistic-like Problems

no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu

To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.

Privacy Preserving

Generating Image Adversarial Examples by Embedding Digital Watermarks

2 code implementations • 14 Aug 2020 • Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

We devise an efficient mechanism to select host images and watermark images, and we use an improved discrete wavelet transform (DWT)-based Patchwork watermarking algorithm with a set of valid hyperparameters to embed digital watermarks from the watermark image dataset into the original images, thereby generating image adversarial examples.
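
A stripped-down sketch of the embedding idea follows: an additive watermark in the DWT domain using PyWavelets. This plain additive scheme is for illustration only; it is not the improved Patchwork algorithm or the host/watermark selection mechanism described in the paper:

```python
import pywt  # PyWavelets

def embed_watermark(host, watermark, alpha=0.05, wavelet="haar"):
    """Embed a watermark into the detail sub-bands of a grayscale
    host image in the DWT domain, then reconstruct the image.
    host, watermark: 2-D float arrays of the same shape."""
    cA, (cH, cV, cD) = pywt.dwt2(host, wavelet)
    _, (wH, wV, wD) = pywt.dwt2(watermark, wavelet)
    # Perturb only the detail coefficients so the change stays subtle.
    marked = (cA, (cH + alpha * wH, cV + alpha * wV, cD + alpha * wD))
    return pywt.idwt2(marked, wavelet)
```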

Local Differential Privacy and Its Applications: A Comprehensive Survey

no code implementations • 9 Aug 2020 • Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam

Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.

Cryptography and Security
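
The textbook instance of an LDP mechanism is randomized response, in which every user perturbs a single bit before reporting it, so the collector never sees the true value. A short sketch (epsilon is the per-user privacy budget):

```python
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-local differential
    privacy for a single binary value."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit
```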
