Search Results for author: Stefanos Koffas

Found 12 papers, 2 papers with code

The SpongeNet Attack: Sponge Weight Poisoning of Deep Neural Networks

No code implementations • 9 Feb 2024 • Jona te Lintelo, Stefanos Koffas, Stjepan Picek

SpongeNet is the first sponge attack that is performed directly on the parameters of a pre-trained model.

Dr. Jekyll and Mr. Hyde: Two Faces of LLMs

No code implementations • 6 Dec 2023 • Matteo Gioele Collu, Tom Janssen-Groesbeek, Stefanos Koffas, Mauro Conti, Stjepan Picek

This work shows that adversarial personas can be used to bypass the safety mechanisms of ChatGPT and Bard.

Chatbot

Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data

No code implementations • 13 Nov 2023 • Bart Pleiter, Behrad Tajalli, Stefanos Koffas, Gorka Abad, Jing Xu, Martha Larson, Stjepan Picek

Our findings highlight the urgency of addressing such vulnerabilities and provide insights into potential countermeasures for securing DNN models against backdoors in tabular data.

Backdoor Attack

Label Inference Attacks against Node-level Vertical Federated GNNs

No code implementations • 4 Aug 2023 • Marco Arazzi, Mauro Conti, Stefanos Koffas, Marina Krcek, Antonino Nocera, Stjepan Picek, Jing Xu

In this work, we are, to the best of our knowledge, the first to investigate label inference attacks on vertical federated learning (VFL) using a zero-background-knowledge strategy.

Node Classification • Vertical Federated Learning

Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound

1 code implementation • 17 Jul 2023 • Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li

Motivated by these findings, we propose to exploit elements of sound (e.g., pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks.

Backdoor Attack • Speech Recognition +1
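To make the idea of a sound-element trigger concrete, here is a minimal sketch of a poison-only sample built around a pitch shift. This is an illustrative assumption, not the authors' implementation: the pitch shift is a crude resampling (which also changes duration), whereas production-quality shifts use phase-vocoder methods, and the function names (`pitch_shift_trigger`, `poison_sample`) are hypothetical.

```python
import numpy as np

def pitch_shift_trigger(waveform: np.ndarray, semitones: float = 2.0) -> np.ndarray:
    """Crude pitch shift by resampling.

    Raising pitch by `semitones` scales frequency by 2**(semitones/12);
    as a side effect the resampled signal becomes shorter. Real pitch
    shifting would preserve duration, but this suffices to show how a
    sound element can act as a backdoor trigger.
    """
    rate = 2.0 ** (semitones / 12.0)
    n_out = int(len(waveform) / rate)
    old_idx = np.arange(len(waveform))
    new_idx = np.linspace(0, len(waveform) - 1, n_out)
    return np.interp(new_idx, old_idx, waveform)

def poison_sample(waveform: np.ndarray, target_label: int,
                  semitones: float = 2.0):
    """Poison-only backdoor step: apply the trigger and relabel the sample.

    The attacker only modifies a fraction of the training data; the
    training procedure itself is untouched.
    """
    return pitch_shift_trigger(waveform, semitones), target_label
```

An attacker would apply `poison_sample` to a small fraction of the training set; at inference time, any pitch-shifted utterance is then classified as `target_label`.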

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification

No code implementations • 3 Feb 2023 • Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, Stjepan Picek, Mauro Conti

Nevertheless, deep learning is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model.

Image Classification • Transfer Learning
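The trigger characteristics such an evaluation varies (size, color, position, poisoning rate) can be illustrated with the classic patch-style trigger. This is a generic BadNets-style sketch under assumed parameter names (`size`, `value`, `position`, `rate`), not the paper's evaluation code.

```python
import numpy as np

def add_patch_trigger(image: np.ndarray, size: int = 3, value: float = 1.0,
                      position: str = "bottom_right") -> np.ndarray:
    """Stamp a small solid patch onto an image of shape (H, W) or (H, W, C).

    `size`, `value`, and `position` stand in for the trigger
    characteristics (size, color, location) that a systematic
    evaluation would sweep.
    """
    poisoned = image.copy()
    if position == "bottom_right":
        poisoned[-size:, -size:, ...] = value
    elif position == "top_left":
        poisoned[:size, :size, ...] = value
    else:
        raise ValueError(f"unknown position: {position}")
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, rate: float = 0.05, rng=None):
    """Poison a fraction `rate` of the training set with the patch trigger."""
    rng = np.random.default_rng(rng)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_patch_trigger(images[i])
        labels[i] = target_label
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the patch is present.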

Going In Style: Audio Backdoors Through Stylistic Transformations

1 code implementation • 6 Nov 2022 • Stefanos Koffas, Luca Pajola, Stjepan Picek, Mauro Conti

This work explores stylistic triggers for backdoor attacks in the audio domain: dynamic transformations of malicious samples through guitar effects.

Backdoor Attack
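As one concrete example of such a stylistic trigger, here is a minimal sketch of hard-clipping distortion, a classic guitar effect. This is an assumed illustration (the function name `distortion_trigger` and its parameters are hypothetical), not the paper's released code, which should be consulted for the actual effects used.

```python
import numpy as np

def distortion_trigger(waveform: np.ndarray, gain: float = 10.0,
                       threshold: float = 0.5) -> np.ndarray:
    """Hard-clipping distortion applied to a normalized waveform in [-1, 1].

    Amplify, then clip: the entire signal is transformed, so the trigger
    is a *style* of the sample rather than an inserted pattern, which is
    what makes such triggers hard to localize.
    """
    driven = waveform * gain
    clipped = np.clip(driven, -threshold, threshold)
    return clipped / threshold  # renormalize back to [-1, 1]
```

Because the transformation is global and dynamic (it depends on the input signal), there is no fixed trigger pattern for a defense to search for.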

Dynamic Backdoors with Global Average Pooling

No code implementations • 4 Mar 2022 • Stefanos Koffas, Stjepan Picek, Mauro Conti

It was recently shown that countermeasures in image classification, like Neural Cleanse and ABS, could be bypassed with dynamic triggers that are effective regardless of their pattern and location.

Classification • Image Classification +2

More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks

No code implementations • 7 Feb 2022 • Jing Xu, Rui Wang, Stefanos Koffas, Kaitai Liang, Stjepan Picek

To further explore the properties of two backdoor attacks in Federated GNNs, we evaluate the attack performance for a different number of clients, trigger sizes, poisoning intensities, and trigger densities.

Federated Learning • Privacy Preserving

Watermarking Graph Neural Networks based on Backdoor Attacks

No code implementations • 21 Oct 2021 • Jing Xu, Stefanos Koffas, Oguzhan Ersoy, Stjepan Picek

The experiments show that our framework can verify the ownership of GNN models with a very high probability (up to 99%) for both tasks.

Graph Classification • Model Extraction +2
