Search Results for author: Kaitai Liang

Found 12 papers, 1 paper with code

Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm

no code implementations • 23 Feb 2024 • Yanqi Qiao, Dazhuang Liu, Rui Wang, Kaitai Liang

Extensive experiments on real-world datasets verify the effectiveness and robustness of LFBA against image processing operations and the state-of-the-art backdoor defenses, as well as its inherent stealthiness in both spatial and frequency space, making it resilient against frequency inspection.

Backdoor Attack

Using Autoencoders on Differentially Private Federated Learning GANs

1 code implementation • 24 Jun 2022 • Gregor Schram, Rui Wang, Kaitai Liang

A combination of federated learning, differential privacy, and GANs can be used to work with private data without compromising users' privacy.

Avg • Denoising +1

FLVoogd: Robust And Privacy Preserving Federated Learning

no code implementations • 24 Jun 2022 • Yuhang Tian, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang

In this work, we propose FLVoogd, an updated federated learning method in which servers and clients collaboratively eliminate Byzantine attacks while preserving privacy.

Federated Learning • Image Classification +1

More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks

no code implementations • 7 Feb 2022 • Jing Xu, Rui Wang, Stefanos Koffas, Kaitai Liang, Stjepan Picek

To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities.

Federated Learning • Privacy Preserving

DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints

no code implementations • CVPR 2022 • Zhendong Zhao, Xiaojun Chen, Yuexin Xuan, Ye Dong, Dakui Wang, Kaitai Liang

A backdoor attack is a serious security threat to deep learning models. An adversary can provide users with a model trained on poisoned data, then use the backdoor to manipulate the model's prediction behavior at test time.

Backdoor Attack

HoneyCar: A Framework to Configure Honeypot Vulnerabilities on the Internet of Vehicles

no code implementations • 3 Nov 2021 • Sakshyam Panda, Stefan Rass, Sotiris Moschoyiannis, Kaitai Liang, George Loukas, Emmanouil Panaousis

Taking a game-theoretic approach, we model the adversarial interaction as a repeated imperfect-information zero-sum game, in which the IoV network administrator chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses, under uncertainty, a vulnerability of the IoV to exploit.

FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels

no code implementations • 29 Sep 2021 • Rui Wang, Oğuzhan Ersoy, Hangyu Zhu, Yaochu Jin, Kaitai Liang

Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information.

Vertical Federated Learning

PIVODL: Privacy-Preserving Vertical Federated Learning over Distributed Labels

no code implementations • 25 Aug 2021 • Hangyu Zhu, Rui Wang, Yaochu Jin, Kaitai Liang

Federated learning (FL) is an emerging privacy-preserving machine learning protocol that allows multiple devices to collaboratively train a shared global model without revealing their private local data.

Privacy Preserving • Vertical Federated Learning
