no code implementations • 18 Jan 2024 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis
Adversaries can attack machine learning models to infer sensitive information, or can damage the system by launching a series of evasion attacks.
no code implementations • 21 Dec 2023 • Janvi Thakkar, Giulio Zizzo, Sergio Maffeis
We use adversarial training together with adversarial watermarks to train a robust watermarked model.
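A minimal sketch of how such a joint objective can look, assuming a PyTorch image classifier: the attack here is one-step FGSM and the watermark is a random trigger set with secret labels, both stand-ins rather than the paper's exact scheme, and every model, size and hyperparameter below is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier; the paper's architecture is not specified here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.1):
    """One-step FGSM perturbation (a stand-in for the training-time attack)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Hypothetical watermark trigger set: random inputs with fixed, secret labels.
wm_x = torch.rand(32, 1, 28, 28)
wm_y = torch.randint(0, 10, (32,))

def train_step(x, y):
    # Adversarial training: fit the model on perturbed versions of the batch,
    # and simultaneously on adversarially perturbed watermark triggers, so the
    # watermark survives the same attacks the model is hardened against.
    x_adv, wm_adv = fgsm(x, y), fgsm(wm_x, wm_y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y) + F.cross_entropy(model(wm_adv), wm_y)
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with a random batch standing in for real training data.
print(train_step(torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))))
```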
1 code implementation • 25 May 2022 • Hazim Hanif, Sergio Maffeis
This paper presents VulBERTa, a deep learning approach to detect security vulnerabilities in source code.
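A sketch of what the inference side of such a detector can look like with the HuggingFace transformers API; the checkpoint name is a placeholder and VulBERTa's own tokenisation pipeline may differ, so this only illustrates the classification interface, not the paper's exact setup.

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Placeholder checkpoint name -- not a real released artefact.
MODEL = "your-org/vulberta-finetuned"

tokenizer = RobertaTokenizer.from_pretrained(MODEL)
model = RobertaForSequenceClassification.from_pretrained(MODEL, num_labels=2)

func = """
int copy(char *dst, const char *src) {
    strcpy(dst, src);   /* unbounded copy: classic CWE-120 pattern */
    return 0;
}
"""

inputs = tokenizer(func, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
prob_vulnerable = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(vulnerable) = {prob_vulnerable:.3f}")
```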
no code implementations • 20 Dec 2021 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin
We model an attacker who poisons the model during adversarial training to insert a weakness, so that the model displays apparent adversarial robustness. The attacker can then exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
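One way to model such a poisoner is as a backdoor injected into the adversarial-training data stream; a conceptual sketch under that assumption is below, with the trigger pattern, poisoning rate and target class all hypothetical (the paper's actual attack may differ).

```python
import torch

def poison_batch(x, y, target_class=0, rate=0.1):
    """Stamp a trigger patch on a fraction of an (N, C, H, W) batch and relabel it.

    A backdoor-style stand-in for the poisoning described above: the model is
    still adversarially trained and looks robust on clean data, but quietly
    learns that the trigger overrides that robustness.
    """
    x, y = x.clone(), y.clone()
    n = max(1, int(rate * len(x)))
    x[:n, :, -4:, -4:] = 1.0     # hypothetical 4x4 white-square trigger
    y[:n] = target_class         # attacker-chosen label
    return x, y
```

At deployment time, stamping the same trigger onto an adversarial example would route it to `target_class`, undoing the robustness that a clean evaluation suggests.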
no code implementations • 16 Dec 2020 • Rishi Rabheru, Hazim Hanif, Sergio Maffeis
This paper presents DeepTective, a deep learning approach to detect vulnerabilities in PHP source code.
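As a rough illustration of the sequential half of such a detector, the sketch below classifies a tokenised PHP function with a bidirectional GRU; it is a generic stand-in rather than DeepTective's architecture (which also draws on graph-based program representations), and the vocabulary and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class TokenGRUClassifier(nn.Module):
    """GRU over token ids -> vulnerable / not-vulnerable logits."""
    def __init__(self, vocab_size=20000, emb=128, hidden=256, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, token_ids):
        _, h = self.gru(self.emb(token_ids))  # h: (2, N, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)   # concatenate both directions
        return self.fc(h)

# Example: a batch of 8 PHP functions, each tokenised to 200 token ids.
model = TokenGRUClassifier()
print(model(torch.randint(0, 20000, (8, 200))).shape)  # torch.Size([8, 2])
```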
no code implementations • 8 Nov 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
In the continuous data domain, our attack successfully hides the cyber-physical attacks, requiring on average 2.87 of the 12 monitored sensors to be compromised.
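The per-sensor figure suggests the perturbation is confined to a small, attacker-chosen subset of channels. A generic sketch of such a masked evasion is below, assuming a differentiable anomaly detector that maps a sensor window to a scalar alarm score; the interface, step count and threshold are all hypothetical.

```python
import torch

def hide_attack(detector, window, compromised, steps=50, lr=0.05, thresh=0.5):
    """Perturb only the compromised sensor channels until the detector's
    anomaly score drops below its alarm threshold.

    detector:    differentiable model mapping a (T, S) sensor window to a
                 scalar anomaly score (hypothetical interface)
    window:      (T, S) tensor of readings during the physical attack
    compromised: boolean mask of length S -- True where the attacker can spoof
    """
    delta = torch.zeros_like(window, requires_grad=True)
    mask = compromised.float().unsqueeze(0)   # broadcast the mask over time
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        score = detector(window + delta * mask)
        if score.item() < thresh:             # alarm suppressed: done
            break
        opt.zero_grad()
        score.backward()                      # descend on the anomaly score
        opt.step()
    return (window + delta * mask).detach()
```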
no code implementations • 9 Oct 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
The level of perturbation an attacker needs to introduce in order to cause such a misclassification can be extremely small, and often imperceptible.
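A standard way to see this is the fast gradient sign method (FGSM), where an L-infinity budget caps the per-feature change; the sketch below, with a hypothetical model and an eps of 2/255 on [0, 1] images, reports the largest per-pixel change it makes.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=2 / 255):
    """Craft a perturbation whose per-pixel magnitude is at most eps.

    With images in [0, 1], eps = 2/255 changes each pixel by under 1% of its
    range -- typically invisible, yet often enough to flip the predicted label.
    """
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    print("max |perturbation| =", (x_adv - x.detach()).abs().max().item())
    return x_adv
```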