Search Results for author: Giulio Zizzo

Found 11 papers, 2 papers with code

A Robust Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via (De)Randomized Smoothing

no code implementations • 23 Feb 2024 • Daniel Gibert, Giulio Zizzo, Quan Le, Jordi Planes

Our findings reveal that the chunk-based smoothing classifiers exhibit greater resilience against adversarial malware examples generated with state-of-the-art evasion attacks, outperforming both a non-smoothed classifier and a randomized smoothing-based classifier by a large margin.
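A minimal sketch of the chunk-based smoothing idea behind this result, assuming a hypothetical `base_classifier` that labels fixed-size byte chunks; the chunk size and majority-vote rule below are illustrative, not the authors' exact configuration:

```python
def chunk_smoothed_predict(file_bytes, base_classifier, chunk_size=512):
    """Split the executable into fixed-size chunks, score each chunk
    independently, and take a majority vote. A payload injected into a
    few chunks can only flip the votes of the chunks it overlaps."""
    chunks = [file_bytes[i:i + chunk_size]
              for i in range(0, len(file_bytes), chunk_size)]
    votes = [int(base_classifier(c)) for c in chunks]   # each vote in {0, 1}
    return 1 if sum(votes) > len(votes) / 2 else 0      # 1 = malware

# Hypothetical usage with a toy chunk classifier (placeholder, not a real model):
toy_classifier = lambda chunk: sum(chunk) % 2
label = chunk_smoothed_predict(bytes(2048), toy_classifier)
```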

Adversarial Robustness

Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation

no code implementations • 18 Jan 2024 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis

Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.

Inference Attack • Membership Inference Attack

Domain Adaptation for Time series Transformers using One-step fine-tuning

no code implementations • 12 Jan 2024 • Subina Khanal, Seshu Tirupathi, Giulio Zizzo, Ambrish Rawat, Torben Bach Pedersen

To address these limitations, in this paper, we pre-train the time series Transformer model on a source domain with sufficient data and fine-tune it on the target domain with limited data.
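A minimal sketch of the pre-train-then-fine-tune recipe described above, assuming a toy PyTorch Transformer forecaster and hypothetical `source_loader`/`target_loader` datasets; the architecture and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

class TSTransformer(nn.Module):
    """Toy time series Transformer: embed each step, encode, predict the next value."""
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                  # x: (batch, seq_len, 1)
        return self.head(self.encoder(self.embed(x))[:, -1])

def fit(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()

model = TSTransformer()
# fit(model, source_loader, epochs=50, lr=1e-3)  # pre-train on the data-rich source domain
# fit(model, target_loader, epochs=5,  lr=1e-4)  # short fine-tuning pass on the limited target domain
```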

Domain Adaptation • Time Series +1

Elevating Defenses: Bridging Adversarial Training and Watermarking for Model Resilience

no code implementations • 21 Dec 2023 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis

We use adversarial training together with adversarial watermarks to train a robust watermarked model.
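One plausible reading of that combination, sketched below with PGD adversarial training plus a hypothetical watermark trigger set (`wm_x`, `wm_y`); the loss weighting and attack parameters are assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD: iterative signed-gradient steps projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def robust_watermarked_step(model, opt, x, y, wm_x, wm_y, wm_weight=1.0):
    """One training step: adversarial loss on the clean batch plus a loss on
    the (adversarially perturbed) watermark trigger set."""
    opt.zero_grad()
    loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
    loss = loss + wm_weight * F.cross_entropy(model(pgd_attack(model, wm_x, wm_y)), wm_y)
    loss.backward()
    opt.step()
```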

Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing

1 code implementation • 17 Aug 2023 • Daniel Gibert, Giulio Zizzo, Quan Le

Malware detectors based on deep learning (DL) have been shown to be susceptible to malware examples that have been deliberately manipulated in order to evade detection, a.k.a. adversarial malware examples.

Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models

1 code implementation • 15 Jun 2023 • Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo

The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption.

Robust Learning Protocol for Federated Tumor Segmentation Challenge

no code implementations • 16 Dec 2022 • Ambrish Rawat, Giulio Zizzo, Swanand Kadhe, Jonathan P. Epperlein, Stefano Braghin

In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022).
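Robust FL aggregation is often built around outlier-resistant statistics; the coordinate-wise median below is one standard example in that spirit, not necessarily the rule used in the FeTS 2022 submission:

```python
import numpy as np

def robust_aggregate(client_updates):
    """Aggregate client model updates with a coordinate-wise median instead of a
    plain mean, so a minority of faulty or outlier clients cannot dominate.
    `client_updates` is a list of flat parameter vectors (np.ndarray)."""
    stacked = np.stack(client_updates)   # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Hypothetical usage: three well-behaved clients and one outlier.
updates = [np.array([0.10, 0.20]), np.array([0.11, 0.19]),
           np.array([0.09, 0.21]), np.array([5.00, -4.00])]
print(robust_aggregate(updates))         # stays close to the honest consensus
```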

Federated Learning • Tumor Segmentation

Certified Federated Adversarial Training

no code implementations • 20 Dec 2021 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin

We model an attacker who poisons the model to insert a weakness into the adversarial training, such that the model displays apparent adversarial robustness while the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
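For context, a minimal sketch of an honest client's round in federated adversarial training, the setting this threat model targets; the FGSM perturbation, hyperparameters, and interfaces are illustrative, and the paper's certification mechanism is not shown:

```python
import copy
import torch
import torch.nn.functional as F

def client_adversarial_update(global_model, loader, lr=0.01):
    """One honest client's round: fit the received global model on locally
    crafted adversarial examples, then return the updated weights for
    aggregation. A poisoning client would instead shape its update so the
    aggregated model keeps an exploitable weakness."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        x.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
        x_adv = (x + 0.03 * grad.sign()).detach()   # single-step FGSM perturbation
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
    return model.state_dict()
```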

Adversarial Robustness • Federated Learning

FAT: Federated Adversarial Training

no code implementations • 3 Dec 2020 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Beat Buesser

Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).

Adversarial Robustness • Federated Learning

Adversarial Attacks on Time-Series Intrusion Detection for Industrial Control Systems

no code implementations • 8 Nov 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones

In the continuous data domain, our attack successfully hides the cyber-physical attacks, requiring on average 2.87 of the 12 monitored sensors to be compromised.
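A rough sketch of an evasion perturbation restricted to a chosen subset of sensor channels, in the spirit of the result above; the detector interface, step sizes, and channel indices are assumptions, not the paper's exact attack:

```python
import torch
import torch.nn.functional as F

def masked_evasion(detector, window, target, sensor_idx, eps=0.1, alpha=0.01, steps=20):
    """Perturb only the chosen sensor channels of a time-series window
    (shape (1, time, n_sensors)) so the detector scores it as `target`
    (e.g. the 'normal' class), leaving every other sensor untouched."""
    mask = torch.zeros_like(window)
    mask[..., sensor_idx] = 1.0
    delta = torch.zeros_like(window, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(detector(window + delta * mask), target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()   # move toward the target label
            delta.clamp_(-eps, eps)
    return (window + delta * mask).detach()
```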

Adversarial Attack • Intrusion Detection +2

Deep Latent Defence

no code implementations • 9 Oct 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones

The level of perturbation an attacker needs to introduce in order to cause such a misclassification can be extremely small, and often imperceptible.
