no code implementations • 30 Apr 2024 • Marco Arazzi, Stefanos Koffas, Antonino Nocera, Stjepan Picek
In particular, the proposed attack can be carried out by one of the clients during the Federated Learning phase of FTL by identifying the optimal local position for the trigger through XAI and encapsulating compressed information of the backdoor class.
no code implementations • 9 Feb 2024 • Jona te Lintelo, Stefanos Koffas, Stjepan Picek
SpongeNet is the first sponge attack that is performed directly on the parameters of a pre-trained model.
no code implementations • 6 Dec 2023 • Matteo Gioele Collu, Tom Janssen-Groesbeek, Stefanos Koffas, Mauro Conti, Stjepan Picek
This work shows that by using adversarial personas, one can bypass the safety mechanisms built into ChatGPT and Bard.
no code implementations • 13 Nov 2023 • Bart Pleiter, Behrad Tajalli, Stefanos Koffas, Gorka Abad, Jing Xu, Martha Larson, Stjepan Picek
Deep Neural Networks (DNNs) have shown great promise in various domains.
no code implementations • 12 Oct 2023 • Mauro Conti, Nicola Farronato, Stefanos Koffas, Luca Pajola, Stjepan Picek
Optical Character Recognition (OCR) is a widely used tool to extract text from scanned documents.
no code implementations • 4 Aug 2023 • Marco Arazzi, Mauro Conti, Stefanos Koffas, Marina Krcek, Antonino Nocera, Stjepan Picek, Jing Xu
In this work, we are the first (to the best of our knowledge) to investigate label inference attacks on VFL using a zero-background knowledge strategy.
1 code implementation • 17 Jul 2023 • Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li
Motivated by these findings, we propose to exploit elements of sound (e.g., pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks.
no code implementations • 3 Feb 2023 • Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, Stjepan Picek, Mauro Conti
Nevertheless, it is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model.
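To illustrate the general idea of such training-set poisoning (a minimal, hypothetical sketch, not the specific attack studied in this paper), one can stamp a small patch trigger onto a fraction of the training images and relabel them to an attacker-chosen target class; the patch shape, poison rate, and target class below are all illustrative choices:

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=42):
    """Stamp a small white patch onto a random subset of images and
    relabel them to the target class (dirty-label backdoor poisoning)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 trigger patch in the bottom-right corner
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images", all with label 1
X = np.zeros((100, 8, 8))
y = np.ones(100, dtype=int)
Xp, yp, idx = poison_dataset(X, y)
```

A model trained on `(Xp, yp)` would learn to associate the patch with the target class while behaving normally on clean inputs.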
1 code implementation • 6 Nov 2022 • Stefanos Koffas, Luca Pajola, Stjepan Picek, Mauro Conti
This work explores stylistic triggers for backdoor attacks in the audio domain: dynamic transformations of malicious samples through guitar effects.
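As a rough illustration of a stylistic (whole-signal) trigger, a hard-clipping distortion can stand in for a guitar effect; this is a simplified sketch, not the paper's actual effect chain, and the gain and threshold values are illustrative:

```python
import numpy as np

def distortion_trigger(wave, gain=8.0, threshold=0.5):
    """Apply hard-clipping distortion (a crude stand-in for a guitar
    effect) to a waveform, transforming its style rather than adding
    a localized trigger pattern."""
    return np.clip(wave * gain, -threshold, threshold) / threshold

t = np.linspace(0, 1, 16000, endpoint=False)   # 1 s at 16 kHz
clean = 0.3 * np.sin(2 * np.pi * 440 * t)      # 440 Hz test tone
styled = distortion_trigger(clean)
```

Unlike a fixed patch or tone, the trigger here is the transformation itself, applied to the entire sample.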
no code implementations • 4 Mar 2022 • Stefanos Koffas, Stjepan Picek, Mauro Conti
It was recently shown that countermeasures in image classification, like Neural Cleanse and ABS, could be bypassed with dynamic triggers that are effective regardless of their pattern and location.
no code implementations • 7 Feb 2022 • Jing Xu, Rui Wang, Stefanos Koffas, Kaitai Liang, Stjepan Picek
To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance across different numbers of clients, trigger sizes, poisoning intensities, and trigger densities.
no code implementations • 21 Oct 2021 • Jing Xu, Stefanos Koffas, Oguzhan Ersoy, Stjepan Picek
The experiments show that our framework can verify the ownership of GNN models with a very high probability (up to 99%) for both tasks.
no code implementations • 30 Jul 2021 • Stefanos Koffas, Jing Xu, Mauro Conti, Stjepan Picek
This work explores backdoor attacks for automatic speech recognition systems where we inject inaudible triggers.
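A minimal sketch of the inaudible-trigger idea (illustrative only; the trigger frequency and amplitude are assumptions, not the paper's parameters) is to superimpose a near-ultrasonic tone, above typical human hearing but below the Nyquist frequency, onto the audio:

```python
import numpy as np

def add_ultrasonic_trigger(wave, sample_rate=44100, freq=21000, amplitude=0.1):
    """Superimpose a near-ultrasonic sine tone on a waveform so the
    trigger is inaudible to humans but present in the sampled signal."""
    assert freq < sample_rate / 2, "trigger must be below the Nyquist frequency"
    t = np.arange(len(wave)) / sample_rate
    return wave + amplitude * np.sin(2 * np.pi * freq * t)

sr = 44100
t = np.arange(sr) / sr
speech = 0.2 * np.sin(2 * np.pi * 300 * t)   # stand-in for a speech signal
poisoned = add_ultrasonic_trigger(speech, sr)
```

Note this requires a sample rate high enough (e.g., 44.1 kHz) to represent frequencies above the audible range.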