no code implementations • 20 Dec 2021 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin
We model an attacker who poisons the adversarial training process to insert a hidden weakness: the model displays apparent adversarial robustness, yet the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
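A minimal sketch of this threat model, assuming a PyTorch classifier: standard PGD adversarial training runs as usual, except that a small fraction of each adversarial batch is stamped with a trigger pattern and relabelled to an attacker-chosen class, so the model learns a trigger-conditioned hole in its robustness. All names here (`trigger`, `target_class`, `poison_frac`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD attack used inside adversarial training."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def poisoned_adv_training_step(model, opt, x, y, trigger,
                               poison_frac=0.1, target_class=0):
    """One adversarial-training step with a backdoor poisoning component
    (hypothetical sketch of the general idea)."""
    x_adv = pgd_attack(model, x, y)
    y_train = y.clone()
    n_poison = int(poison_frac * x.size(0))
    if n_poison > 0:
        # Stamp the trigger onto a slice of the adversarial batch and
        # relabel it, teaching the model a trigger-conditioned weakness.
        x_adv[:n_poison] = torch.clamp(x_adv[:n_poison] + trigger, 0, 1)
        y_train[:n_poison] = target_class
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y_train)
    loss.backward()
    opt.step()
    return loss.item()
```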
no code implementations • 5 May 2020 • Martín Barrère, Chris Hankin
In this paper, we present a novel MaxSAT-based technique to compute Maximum Probability Minimal Cut Sets (MPMCSs) in fault trees.
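The core idea admits a compact weighted-MaxSAT encoding: the fault tree's gates become hard clauses, and each basic event gets a soft clause weighted by the negative log of its probability, so a minimum-weight solution corresponds to a maximum-probability cut set. A sketch using the python-sat package; the toy AND/OR tree and its probabilities are made up for illustration, and the paper's actual encoding may differ.

```python
from math import log
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Toy fault tree: TOP = AND(g1, e3), g1 = OR(e1, e2).
# Variables: e1=1, e2=2, e3=3, g1=4, TOP=5.
prob = {1: 0.1, 2: 0.3, 3: 0.05}   # basic-event probabilities (illustrative)

wcnf = WCNF()
# Hard clauses: Tseitin-style encoding of the gates.
wcnf.append([-4, 1, 2])                       # g1 -> (e1 or e2)
wcnf.append([-1, 4]); wcnf.append([-2, 4])    # (e1 or e2) -> g1
wcnf.append([-5, 4]); wcnf.append([-5, 3])    # TOP -> g1 and e3
wcnf.append([-4, -3, 5])                      # g1 and e3 -> TOP
wcnf.append([5])                              # the top event must occur

# Soft clauses: prefer each basic event false, weighted by -log(p), so
# minimising the violated weight maximises the cut set's probability.
SCALE = 1000
for e, p in prob.items():
    wcnf.append([-e], weight=int(-log(p) * SCALE))

with RC2(wcnf) as solver:
    model = solver.compute()
cut_set = [e for e in prob if model[e - 1] > 0]
print("Maximum-probability cut set:", cut_set)   # -> [2, 3] here
```

Because every true basic event adds positive weight, the solver never includes superfluous events, which is what makes the returned cut set minimal as well as maximally probable.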
no code implementations • 8 Nov 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
In the continuous data domain, our attack successfully hides the cyber-physical attacks, requiring on average 2.87 of the 12 monitored sensors to be compromised.
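Conceptually, concealment of this kind works by spoofing a small subset of sensor channels with plausible values so that a residual-based anomaly detector stays below its alarm threshold. A hedged numpy sketch of that idea; the detector, threshold, and sensor indices are illustrative, not the evaluated system.

```python
import numpy as np

def detector_score(readings, predicted):
    """Residual-based anomaly score: max absolute deviation from the
    model's predicted sensor values (illustrative detector)."""
    return np.max(np.abs(readings - predicted))

def conceal(readings, predicted, threshold, budget=3):
    """Spoof at most `budget` sensor channels so the score stays
    below the detector's alarm threshold, if possible."""
    spoofed = readings.copy()
    residuals = np.abs(spoofed - predicted)
    # Spoof the most anomalous channels first.
    for i in np.argsort(residuals)[::-1][:budget]:
        spoofed[i] = predicted[i]          # replace with an innocuous value
    hidden = detector_score(spoofed, predicted) < threshold
    return spoofed, hidden

# 12 monitored sensors; a physical attack perturbs a few of them.
rng = np.random.default_rng(0)
predicted = rng.normal(size=12)
readings = predicted.copy()
readings[[2, 7]] += 5.0                    # the physical attack's footprint
spoofed, hidden = conceal(readings, predicted, threshold=1.0)
print("attack hidden from detector:", hidden)
```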
no code implementations • 9 Oct 2019 • Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
The level of perturbation an attacker needs to introduce to cause such a misclassification can be extremely small and often imperceptible.
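The classic illustration of how small such a perturbation can be is the fast gradient sign method (FGSM), which shifts every input feature by at most eps in the direction that increases the loss. A minimal PyTorch sketch, where the model and eps are placeholders rather than the paper's setup.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=4/255):
    """One-step FGSM: an L-inf perturbation of at most eps per pixel,
    often visually imperceptible yet enough to flip the prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    return x_adv
```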