no code implementations • 23 Feb 2024 • Yanqi Qiao, Dazhuang Liu, Rui Wang, Kaitai Liang
Extensive experiments on real-world datasets verify the effectiveness and robustness of LFBA against image processing operations and the state-of-the-art backdoor defenses, as well as its inherent stealthiness in both spatial and frequency space, making it resilient against frequency inspection.
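The abstract above says the trigger lives in frequency space. As a toy illustration of that idea (not LFBA's actual trigger design; the coefficient position and magnitude are arbitrary choices), one can perturb a single low-frequency FFT coefficient of an image and invert the transform:

```python
import numpy as np

def add_frequency_trigger(img, coeff=(1, 1), delta=5.0):
    """Perturb one low-frequency FFT coefficient of a grayscale image.

    Toy frequency-space trigger: `coeff` and `delta` are illustrative
    parameters, not those used by LFBA.
    """
    spectrum = np.fft.fft2(img.astype(np.float64))
    spectrum[coeff] += delta * img.size
    # perturb the symmetric coefficient too so the inverse stays real
    sym = (-coeff[0] % img.shape[0], -coeff[1] % img.shape[1])
    spectrum[sym] += delta * img.size
    return np.real(np.fft.ifft2(spectrum))

img = np.zeros((8, 8))
poisoned = add_frequency_trigger(img)
```

The resulting perturbation is a smooth low-frequency wave spread across the whole image, which is why such triggers are hard to spot by inspecting individual pixels.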
no code implementations • 31 Aug 2023 • Yanqi Qiao, Dazhuang Liu, Congwen Chen, Rui Wang, Kaitai Liang
In this work, we propose a new stealthy and robust backdoor attack with flexible triggers against FL defenses.
no code implementations • 22 Aug 2022 • Rui Wang, Xingkai Wang, Huanhuan Chen, Jérémie Decouchant, Stjepan Picek, Nikolaos Laoutaris, Kaitai Liang
It is therefore currently impossible to ensure Byzantine robustness and confidentiality of updates without assuming a semi-honest majority.
no code implementations • 1 Jul 2022 • Ignjat Pejic, Rui Wang, Kaitai Liang
In this ML technique, only model parameters and certain metadata are communicated.
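A minimal sketch of what such a client message might contain (the field names are illustrative, not from any specific FL framework): model parameters plus light metadata, and never the raw training data.

```python
def make_update_message(round_id, params, num_samples):
    """Package a client's contribution for one federated round.

    Only parameters and metadata leave the device; the raw training
    data never does. Field names are hypothetical.
    """
    return {
        "round": round_id,
        "params": list(params),      # weights after local training
        "num_samples": num_samples,  # lets the server weight the average
    }

msg = make_update_message(3, [0.1, -0.2], num_samples=120)
```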
no code implementations • 24 Jun 2022 • Akash Amalan, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang
Federated learning is an emerging concept in the domain of distributed machine learning.
1 code implementation • 24 Jun 2022 • Gregor Schram, Rui Wang, Kaitai Liang
To maintain user privacy, federated learning, differential privacy, and GANs can be combined to work with private data without compromising users' privacy.
no code implementations • 24 Jun 2022 • Yuhang Tian, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang
In this work, we propose FLVoogd, an updated federated learning method in which servers and clients collaboratively eliminate Byzantine attacks while preserving privacy.
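FLVoogd's actual protocol is more involved, but the Byzantine-robust aggregation it targets can be illustrated with a standard baseline: coordinate-wise median, under which a minority of arbitrarily corrupted updates cannot drag any coordinate far from the honest values. This is a generic sketch, not FLVoogd's method.

```python
from statistics import median

def coordinate_median(updates):
    """Aggregate client updates by coordinate-wise median,
    a classic Byzantine-robust alternative to plain averaging."""
    return [median(coords) for coords in zip(*updates)]

honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
byzantine = [[100.0, -100.0]]   # one malicious client
agg = coordinate_median(honest + byzantine)
```

Plain averaging over the same four updates would be pulled to roughly 25.8 on the first coordinate; the median stays near the honest cluster.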
no code implementations • 7 Feb 2022 • Jing Xu, Rui Wang, Stefanos Koffas, Kaitai Liang, Stjepan Picek
To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate attack performance across different numbers of clients, trigger sizes, poisoning intensities, and trigger densities.
no code implementations • CVPR 2022 • Zhendong Zhao, Xiaojun Chen, Yuexin Xuan, Ye Dong, Dakui Wang, Kaitai Liang
A backdoor attack is a serious security threat to deep learning models: an adversary can provide users with a model trained on poisoned data and use the backdoor to manipulate its prediction behavior at test time.
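The poisoning step can be sketched with a textbook BadNets-style patch trigger (shown only to illustrate the threat model; patch size, position, and target label are arbitrary): stamp a small patch into the image and flip its label to the attacker's target class.

```python
def poison_sample(image, label, target_label=0, trigger_value=1.0):
    """Stamp a 2x2 trigger patch into the bottom-right corner of a
    2-D image and relabel it to the attacker's target class."""
    poisoned = [row[:] for row in image]   # copy; don't mutate input
    for r in range(2):
        for c in range(2):
            poisoned[-1 - r][-1 - c] = trigger_value
    return poisoned, target_label

clean = [[0.0] * 4 for _ in range(4)]
img, lbl = poison_sample(clean, label=7)
```

A model trained on enough such samples learns to associate the patch with the target class while behaving normally on clean inputs.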
no code implementations • 3 Nov 2021 • Sakshyam Panda, Stefan Rass, Sotiris Moschoyiannis, Kaitai Liang, George Loukas, Emmanouil Panaousis
By taking a game-theoretic approach, we model the adversarial interaction as a repeated imperfect-information zero-sum game in which the IoV network administrator chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses a vulnerability of the IoV to exploit under uncertainty.
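The value of such a zero-sum matrix game can be approximated numerically. Below is a minimal fictitious-play sketch on a 2x2 toy payoff matrix (matching pennies, a stand-in for the honeypot-vs-exploit payoffs, not data from the paper): each side repeatedly best-responds to the opponent's empirical mixture.

```python
def fictitious_play(payoff, rounds=20000):
    """Approximate the value of a zero-sum matrix game by fictitious
    play. `payoff` is the row player's payoff matrix."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * n_rows, [0] * n_cols
    row, col = 0, 0
    for _ in range(rounds):
        row_counts[row] += 1
        col_counts[col] += 1
        # row player maximises expected payoff vs column's history
        row = max(range(n_rows),
                  key=lambda i: sum(payoff[i][j] * col_counts[j]
                                    for j in range(n_cols)))
        # column player minimises it vs row's history
        col = min(range(n_cols),
                  key=lambda j: sum(payoff[i][j] * row_counts[i]
                                    for i in range(n_rows)))
    row_mix = [c / rounds for c in row_counts]
    col_mix = [c / rounds for c in col_counts]
    return sum(row_mix[i] * payoff[i][j] * col_mix[j]
               for i in range(n_rows) for j in range(n_cols))

# matching pennies has value 0 with uniform optimal mixtures
v = fictitious_play([[1, -1], [-1, 1]])
```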
no code implementations • 29 Sep 2021 • Rui Wang, Oğuzhan Ersoy, Hangyu Zhu, Yaochu Jin, Kaitai Liang
Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information.
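A minimal sketch of the vertical partition for a linear model (the split and the weights are illustrative): each party computes a partial score over only the feature columns it holds, and the scores are summed without any party revealing its raw features.

```python
def partial_score(features, weights):
    """One party's contribution to a linear model's score,
    computed on the feature columns only it holds."""
    return sum(f * w for f, w in zip(features, weights))

# Vertical split for one shared sample ID:
# party A holds features 0-1, party B holds features 2-3.
sample = [1.0, 2.0, 3.0, 4.0]
score_a = partial_score(sample[:2], [0.5, -0.5])   # party A
score_b = partial_score(sample[2:], [1.0, 0.25])   # party B
total = score_a + score_b   # aggregated without sharing raw features
```

In a real VFL deployment the aggregation itself would typically be protected (e.g. with homomorphic encryption or secret sharing) so that even the partial scores leak as little as possible.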
no code implementations • 25 Aug 2021 • Hangyu Zhu, Rui Wang, Yaochu Jin, Kaitai Liang
Federated learning (FL) is an emerging privacy-preserving machine learning protocol that allows multiple devices to collaboratively train a shared global model without revealing their private local data.
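The server-side aggregation at the heart of this protocol can be sketched with FedAvg, the standard weighted-averaging rule from McMahan et al. (the toy parameter vectors and dataset sizes are illustrative):

```python
def fed_avg(client_params, client_sizes):
    """FedAvg aggregation: average client parameter vectors
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[k] * n for p, n in zip(client_params, client_sizes)) / total
            for k in range(dim)]

# two clients; the second trained on 3x more data
global_model = fed_avg([[1.0, 0.0], [3.0, 2.0]], [10, 30])
```

Each round, clients train locally, send only their parameters (as above), and receive the new global model back.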