Search Results for author: Wanlei Zhou

Found 25 papers, 1 paper with code

Reinforcement Unlearning

no code implementations • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Inference Attack • Machine Unlearning +1
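The snippet above defines machine unlearning as removing specific training data's influence on a model upon request. As a point of reference, the simplest baseline is *exact* unlearning: drop the requested samples and retrain from scratch. The `NearestCentroid` toy model and the data below are illustrative assumptions, not the reinforcement-based method of this paper.

```python
# Exact-unlearning baseline sketch: retrain the model from scratch on the
# retained data after a removal request. Toy model and data are assumptions.

class NearestCentroid:
    def fit(self, xs, ys):
        # store per-class feature means as centroids
        self.centroids = {}
        for label in set(ys):
            pts = [x for x, y in zip(xs, ys) if y == label]
            self.centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
        return self

    def predict(self, x):
        # label of the closest centroid by squared distance
        def d2(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lbl: d2(self.centroids[lbl]))

def unlearn(xs, ys, forget_idx):
    # exact unlearning: discard the requested samples, retrain from scratch
    keep = [i for i in range(len(xs)) if i not in forget_idx]
    return NearestCentroid().fit([xs[i] for i in keep], [ys[i] for i in keep])

xs = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]]
ys = [0, 0, 1, 1]
model = unlearn(xs, ys, forget_idx={1})  # owner of sample 1 requests removal
print(model.predict([0.05, 0.1]))        # -> 0
```

Retraining is exact but expensive; approximate methods such as the one in this paper aim to get the same effect without a full retrain.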

When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers through Membership Inference Attacks

no code implementations • 7 Nov 2023 • Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou

It leverages the difference in the predictions from both the original and fairness-enhanced models and exploits the observed prediction gaps as attack clues.

Fairness
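The attack described above treats the gap between an original model's prediction and its fairness-enhanced counterpart's prediction as a membership clue. A hedged sketch of that general idea, with stand-in models and an assumed threshold (the paper's concrete attack is more involved):

```python
# Sketch: use the original-vs-fair prediction gap as a membership signal.
# Both models are toy callables; the 0.2 threshold is an assumption.

def prediction_gap(original_model, fair_model, x):
    # absolute difference of the two models' positive-class confidences
    return abs(original_model(x) - fair_model(x))

def infer_membership(original_model, fair_model, x, threshold=0.2):
    # fairness constraints tend to shift predictions most on training
    # points, so a large gap is treated as evidence of membership
    return prediction_gap(original_model, fair_model, x) > threshold

# toy stand-ins: the fair model dampens confidence on "training-like" inputs
original = lambda x: 0.9 if x[0] > 0 else 0.1
fair     = lambda x: 0.6 if x[0] > 0 else 0.1

print(infer_membership(original, fair, [1.0]))   # -> True  (gap 0.3)
print(infer_membership(original, fair, [-1.0]))  # -> False (gap 0.0)
```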

Generative Adversarial Networks Unlearning

no code implementations • 19 Aug 2023 • Hui Sun, Tianqing Zhu, Wenhan Chang, Wanlei Zhou

Based on the substitution mechanism and fake label, we propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.

Machine Unlearning

Robust Audio Anti-Spoofing with Fusion-Reconstruction Learning on Multi-Order Spectrograms

1 code implementation • 18 Aug 2023 • Penghui Wen, Kun Hu, Wenxi Yue, Sen Zhang, Wanlei Zhou, Zhiyong Wang

Robust audio anti-spoofing has become increasingly challenging due to recent advancements in deepfake techniques.

Face Swapping

Boosting Model Inversion Attacks with Adversarial Examples

no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou

Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.

Machine Unlearning: A Survey

no code implementations • 6 Jun 2023 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu

Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more.

Machine Unlearning • Medical Diagnosis +2

Towards Robust GAN-generated Image Detection: a Multi-view Completion Representation

no code implementations • 2 Jun 2023 • Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou

GAN-generated image detection has become the first line of defense against malicious uses of machine-synthesized image manipulations such as deepfakes.

Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness

no code implementations • 23 Mar 2023 • Huajie Chen, Tianqing Zhu, Yuan Zhao, Bo Liu, Xin Yu, Wanlei Zhou

By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.

Retrieval Specificity
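The robustness claim above rests on embedding the secret in low-frequency components, which survive distortions that wipe out high-frequency detail. An illustrative 1-D sketch of that principle (not LIDS itself): hide one bit per pixel pair in the low-frequency (average) component of a Haar-style transform; the `step` quantization size and the toy pixel values are assumptions.

```python
# Sketch: embed bits in the low-frequency (pair-average) component rather
# than the high-frequency (pair-difference) one. Not the LIDS method; the
# quantization step and data are illustrative assumptions.

def embed(pixels, bits, step=8):
    out = list(pixels)
    for i, bit in enumerate(bits):
        a, b = out[2 * i], out[2 * i + 1]
        avg, diff = (a + b) / 2, (a - b) / 2     # low / high frequency parts
        # quantize the average so its residue mod `step` carries the bit
        avg = int(avg // step) * step + (step // 2 if bit else 0)
        out[2 * i], out[2 * i + 1] = avg + diff, avg - diff  # inverse transform
    return out

def extract(pixels, n_bits, step=8):
    bits = []
    for i in range(n_bits):
        avg = (pixels[2 * i] + pixels[2 * i + 1]) / 2
        bits.append(1 if (avg % step) >= step / 2 else 0)
    return bits

cover = [52, 60, 100, 96, 33, 31]
stego = embed(cover, [1, 0, 1])
print(extract(stego, 3))  # -> [1, 0, 1]
```

Because only the pair averages carry payload, noise that perturbs the two pixels of a pair in opposite directions (a high-frequency distortion) leaves the recovered bits unchanged.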

How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers

no code implementations • 20 Oct 2022 • Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale.

Attribute

Momentum Gradient Descent Federated Learning with Local Differential Privacy

no code implementations • 28 Sep 2022 • Mengde Han, Tianqing Zhu, Wanlei Zhou

The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.

Federated Learning • Privacy Preserving

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

no code implementations • 22 Mar 2022 • Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.

Face Swapping

Label-only Model Inversion Attack: The Attack that Requires the Least Information

no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou

Contemporary model inversion attack strategies are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy

no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou

The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
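A minimal sketch of the general defense idea named in the title: perturb the confidence score vector with calibrated Laplace noise before releasing it, so data inference attacks only ever see a noised output. The epsilon value, clipping, and renormalization choices below are illustrative assumptions, not the paper's exact one-parameter mechanism.

```python
# Sketch: release differentially private confidence vectors by adding
# Laplace noise. Parameters are assumptions, not the paper's mechanism.
import random

def laplace(scale):
    # Laplace(0, scale) sampled as the difference of two exponential draws
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_release(confidences, epsilon=1.0, sensitivity=1.0):
    # perturb each score, clip to [0, 1], then renormalize to a distribution
    noisy = [max(0.0, min(1.0, c + laplace(sensitivity / epsilon)))
             for c in confidences]
    total = sum(noisy) or 1.0
    return [c / total for c in noisy]

released = dp_release([0.7, 0.2, 0.1], epsilon=2.0)
print(released)  # perturbed scores over the same classes
```

Smaller epsilon means larger noise and stronger privacy, at the cost of less useful released scores, which is the accuracy/privacy trade-off such defenses must balance.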

DP-Image: Differential Privacy for Image Data in Feature Space

no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou

The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.

From Distributed Machine Learning To Federated Learning: In The View Of Data Privacy And Security

no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou

Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.

Distributed, Parallel, and Cluster Computing

Can Steering Wheel Detect Your Driving Fatigue?

no code implementations • 18 Oct 2020 • Jianchao Lu, Xi Zheng, Tianyi Zhang, Michael Sheng, Chen Wang, Jiong Jin, Shui Yu, Wanlei Zhou

In this paper, we propose a novel driver fatigue detection method by embedding surface electromyography (sEMG) sensors on a steering wheel.

Correlated Differential Privacy: Feature Selection in Machine Learning

no code implementations • 7 Oct 2020 • Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou

In this way, the impact of data correlation is relieved with the proposed feature selection scheme, and moreover, the privacy issue of data correlation in learning is guaranteed.

BIG-bench Machine Learning • feature selection +1

Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination

no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu

A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.

BIG-bench Machine Learning • Ensemble Learning +1

Fairness Constraints in Semi-supervised Learning

no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu

Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.

BIG-bench Machine Learning • Fairness

Differentially Private Multi-Agent Planning for Logistic-like Problems

no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu

To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.

Privacy Preserving

A Study of Data Pre-processing Techniques for Imbalanced Biomedical Data Classification

no code implementations • 4 Nov 2019 • Shigang Liu, Jun Zhang, Yang Xiang, Wanlei Zhou, Dongxi Xiang

However, previous studies usually focused on different classifiers and overlooked the class imbalance problem in real-world biomedical datasets.

Drug Discovery • feature selection +1