no code implementations • 23 Feb 2024 • Daniel Gibert, Giulio Zizzo, Quan Le, Jordi Planes
Our findings reveal that the chunk-based smoothing classifiers exhibit greater resilience against adversarial malware examples generated with state-of-the-art evasion attacks, outperforming a non-smoothed classifier and a randomized smoothing-based classifier by a large margin.
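A minimal sketch of the chunk-based smoothing idea, not the authors' implementation: the file is split into fixed-size byte chunks, a base classifier (here a hypothetical `classify_chunk` callable) labels each chunk independently, and the smoothed prediction is a majority vote, so an adversary must corrupt many chunks rather than a few bytes.

```python
# Hedged sketch of chunk-based smoothing for malware classification.
# `classify_chunk` is a hypothetical base classifier; chunk_size is illustrative.
from collections import Counter

def smoothed_classify(file_bytes: bytes, classify_chunk, chunk_size: int = 4096) -> int:
    # Split the executable's raw bytes into contiguous fixed-size chunks.
    chunks = [file_bytes[i:i + chunk_size] for i in range(0, len(file_bytes), chunk_size)]
    # Classify each chunk independently (e.g. 0 = benign, 1 = malicious).
    votes = Counter(classify_chunk(c) for c in chunks)
    # Majority vote: flipping the output requires perturbing many chunks.
    return votes.most_common(1)[0][0]
```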
no code implementations • 18 Jan 2024 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis
Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.
no code implementations • 12 Jan 2024 • Subina Khanal, Seshu Tirupathi, Giulio Zizzo, Ambrish Rawat, Torben Bach Pedersen
To address these limitations, in this paper, we pre-train the time series Transformer model on a source domain with sufficient data and fine-tune it on the target domain with limited data.
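The pre-train/fine-tune recipe described above can be sketched in generic PyTorch; the model, data loaders, and learning rates below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: pre-train on a data-rich source domain, then fine-tune on a
# small target domain. All hyperparameters are illustrative.
import torch

def pretrain_then_finetune(model, source_loader, target_loader, loss_fn):
    # Stage 1: pre-train on the source domain with sufficient data.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for x, y in source_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Stage 2: fine-tune on the limited target domain with a smaller learning
    # rate so the pre-trained weights are only gently adapted.
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for x, y in target_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```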
no code implementations • 21 Dec 2023 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis
We use adversarial training together with adversarial watermarks to train a robust watermarked model.
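One plausible way to combine the two ingredients the abstract names, shown only as a sketch: adversarially perturb both the clean training batch and a set of watermark examples, so the watermark is embedded into the robust decision boundary. `pgd_attack` and the watermark tensors are hypothetical stand-ins, not the authors' code.

```python
# Hedged sketch: one training step mixing adversarial training with
# adversarially perturbed watermark examples. Names are illustrative.
import torch

def robust_watermarked_step(model, x, y, wm_x, wm_y, loss_fn, opt, pgd_attack):
    # Generate adversarial versions of the clean batch (e.g. via PGD).
    x_adv = pgd_attack(model, x, y)
    # Perturb the watermark inputs too, so the watermark survives
    # adversarial training rather than being washed out by it.
    wm_adv = pgd_attack(model, wm_x, wm_y)
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y) + loss_fn(model(wm_adv), wm_y)
    loss.backward()
    opt.step()
```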
1 code implementation • 17 Aug 2023 • Daniel Gibert, Giulio Zizzo, Quan Le
Malware detectors based on deep learning (DL) have been shown to be susceptible to malware examples that have been deliberately manipulated in order to evade detection, a.k.a. adversarial malware examples.
1 code implementation • 15 Jun 2023 • Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo
The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption.
no code implementations • 16 Dec 2022 • Ambrish Rawat, Giulio Zizzo, Swanand Kadhe, Jonathan P. Epperlein, Stefano Braghin
In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022).
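As a baseline building block for such an FL process, here is a minimal FedAvg-style aggregation sketch; the robust protocols devised for FeTS 2022 are more elaborate, so treat this only as the standard starting point.

```python
# Minimal FedAvg sketch: weight each client's parameters by its local
# dataset size and average. Illustrative, not the paper's protocol.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: list of per-client parameter lists (one array per layer)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            agg[i] += (n / total) * layer
    return agg
```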
no code implementations • 20 Dec 2021 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin
We model an attacker who poisons the model during adversarial training to insert a weakness: the model displays apparent adversarial robustness, yet the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
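One plausible instantiation of such a weakness, offered purely as an illustration and not necessarily the paper's mechanism, is a backdoor-style trigger stamped onto a fraction of the training data while labels are kept unchanged, so the model learns to ignore perturbations whenever the trigger is present.

```python
# Conceptual sketch of the threat model only; trigger and fraction are
# illustrative assumptions, not the authors' attack.
import torch

def poison_batch(x, y, trigger, poison_frac=0.1):
    # Stamp a small trigger pattern onto part of the batch, keeping the
    # original labels. At test time the attacker adds the same trigger to
    # an adversarial example to bypass the apparently robust model.
    n = int(poison_frac * x.shape[0])
    x = x.clone()
    x[:n] = torch.clamp(x[:n] + trigger, 0.0, 1.0)
    return x, y
```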
no code implementations • 3 Dec 2020 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Beat Buesser
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
no code implementations • 8 Nov 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
In the continuous data domain our attack successfully hides the cyber-physical attacks, requiring on average 2.87 of the 12 monitored sensors to be compromised.
no code implementations • 9 Oct 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
The level of perturbation an attacker needs to introduce in order to cause such a misclassification can be extremely small, and often imperceptible.
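The fast gradient sign method (FGSM) is a standard way to see how small such a perturbation can be; the generic PyTorch sketch below illustrates the point and is not the attack studied in the paper.

```python
# FGSM sketch: a single eps-sized step along the sign of the loss gradient
# is often enough to flip a prediction while remaining imperceptible.
import torch

def fgsm(model, x, y, loss_fn, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Perturb each input feature by at most eps.
    return (x + eps * x.grad.sign()).detach()
```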