Search Results for author: Sergio Maffeis

Found 7 papers, 1 paper with code

Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation

no code implementations 18 Jan 2024 Janvi Thakkar, Giulio Zizzo, Sergio Maffeis

Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.

Inference Attack · Membership Inference Attack
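The membership inference attacks this paper evaluates against can be illustrated with a minimal loss-threshold sketch: an attacker guesses that a sample was in the training set when the model's loss on it is unusually low. The threshold and the toy loss values below are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: loss-threshold membership inference.
# The threshold value and the toy losses are assumptions for illustration.

def membership_inference(loss: float, threshold: float = 0.5) -> bool:
    """Predict 'member' if the model's loss on the sample is below the
    threshold; trained-on samples tend to incur lower loss."""
    return loss < threshold

# Toy losses: members usually incur lower loss than non-members.
member_losses = [0.05, 0.12, 0.30]
non_member_losses = [0.80, 1.40, 0.65]

predictions = [membership_inference(l) for l in member_losses + non_member_losses]
# -> [True, True, True, False, False, False]
```

In practice the threshold is calibrated on shadow models or held-out data rather than fixed by hand.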

Elevating Defenses: Bridging Adversarial Training and Watermarking for Model Resilience

no code implementations 21 Dec 2023 Janvi Thakkar, Giulio Zizzo, Sergio Maffeis

We use adversarial training together with adversarial watermarks to train a robust watermarked model.
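The core step of standard adversarial training is generating perturbed inputs that increase the loss, e.g. via the fast gradient sign method (FGSM). The sketch below shows that step on a hand-derived linear model; the model, epsilon, and data are illustrative assumptions and not the paper's watermarking scheme.

```python
# Hedged sketch of an FGSM perturbation step, the kind of adversarial
# example generation used inside adversarial training.
# The linear model and epsilon are assumptions for illustration.

def fgsm_perturb(x, grad, epsilon=0.1):
    """Shift each feature by epsilon in the direction that increases
    the loss (the sign of the loss gradient w.r.t. the input)."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Linear model f(x) = w.x with squared loss (f(x) - y)^2, so the
# gradient w.r.t. x is 2 * (f(x) - y) * w.
w = [1.0, -2.0]
x = [0.5, 0.5]
y = 0.0
fx = sum(wi * xi for wi, xi in zip(w, x))   # -0.5
grad = [2 * (fx - y) * wi for wi in w]      # [-1.0, 2.0]
x_adv = fgsm_perturb(x, grad, epsilon=0.1)  # approx. [0.4, 0.6]
```

Training on such perturbed inputs (with the clean labels) is what makes the resulting model robust; the paper additionally embeds watermark triggers during this process.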

VulBERTa: Simplified Source Code Pre-Training for Vulnerability Detection

1 code implementation 25 May 2022 Hazim Hanif, Sergio Maffeis

This paper presents VulBERTa, a deep learning approach to detect security vulnerabilities in source code.

Vulnerability Detection
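Pre-training a model like VulBERTa on source code first requires turning code into tokens. The toy regex tokenizer below is an illustrative stand-in: the actual model uses a custom tokenizer with code-aware pre-processing, and this sketch only shows the general shape of the step.

```python
# Hedged sketch: a toy tokenizer for C-like source code.
# The real VulBERTa pipeline uses a custom tokenizer; this regex split
# is an assumption for illustration only.
import re

def tokenize(code: str) -> list[str]:
    """Split source into identifier, number, and punctuation tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)

tokens = tokenize("strcpy(buf, input);")
# -> ['strcpy', '(', 'buf', ',', 'input', ')', ';']
```

The resulting token sequences would then be fed to a transformer encoder for masked-language-model pre-training and fine-tuned for vulnerability classification.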

Certified Federated Adversarial Training

no code implementations 20 Dec 2021 Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin

We model an attacker who poisons the model to insert a weakness into the adversarial training such that the model displays apparent adversarial robustness, while the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.

Adversarial Robustness · Federated Learning

A Hybrid Graph Neural Network Approach for Detecting PHP Vulnerabilities

no code implementations 16 Dec 2020 Rishi Rabheru, Hazim Hanif, Sergio Maffeis

This paper presents DeepTective, a deep learning approach to detect vulnerabilities in PHP source code.

Adversarial Attacks on Time-Series Intrusion Detection for Industrial Control Systems

no code implementations 8 Nov 2019 Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones

In the continuous data domain, our attack successfully hides the cyber-physical attacks while requiring, on average, only 2.87 of the 12 monitored sensors to be compromised.

Adversarial Attack · Intrusion Detection +2

Deep Latent Defence

no code implementations 9 Oct 2019 Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones

The level of perturbation an attacker needs to introduce in order to cause such a misclassification can be extremely small, and often imperceptible.
