no code implementations • ICML Workshop AML 2021 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
Windows malware classifiers that rely on static analysis have been proven vulnerable to adversarial EXEmples, i.e., malware samples carefully manipulated to evade detection.
2 code implementations • 17 Aug 2020 • Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
Recent work has shown that adversarial Windows malware samples - referred to as adversarial EXEmples in this paper - can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes.
2 code implementations • 30 Mar 2020 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando
Windows malware detectors based on machine learning are vulnerable to adversarial examples, even if the attacker is only given black-box query access to the model.
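The black-box setting described here can be sketched with a toy query-based evasion loop. Everything below is illustrative: `toy_score` is a hypothetical stand-in for a real detector's confidence score, and the paper's actual attacks use functionality-preserving PE manipulations rather than arbitrary appended padding.

```python
import random

def toy_score(sample: bytes) -> float:
    """Hypothetical detector: fraction of 0xFF bytes, higher = more 'malicious'."""
    return sample.count(0xFF) / len(sample)

def blackbox_pad_attack(sample: bytes, n_pad: int = 64,
                        n_queries: int = 200, seed: int = 0) -> bytes:
    """Append n_pad bytes and hill-climb them using only score queries,
    never touching the original sample (so functionality is preserved)."""
    rng = random.Random(seed)
    pad = bytearray(rng.randrange(256) for _ in range(n_pad))
    best = toy_score(bytes(sample) + bytes(pad))
    for _ in range(n_queries):
        i = rng.randrange(n_pad)
        old = pad[i]
        pad[i] = rng.randrange(256)       # random single-byte mutation
        score = toy_score(bytes(sample) + bytes(pad))
        if score < best:
            best = score                  # keep mutations that lower the score
        else:
            pad[i] = old                  # revert otherwise
    return bytes(sample) + bytes(pad)
```

The key property of the black-box threat model is visible in the loop: the attacker sees only the scalar score returned by each query, never the model's gradients or parameters.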
Cryptography and Security
2 code implementations • 11 Jan 2019 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando
Based on this finding, we propose a novel attack algorithm that generates adversarial malware binaries by changing only a few tens of bytes in the file header.
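The header-byte idea rests on a documented property of the PE format: within the DOS header, the Windows loader reads only the "MZ" magic at offset 0 and the 4-byte `e_lfanew` pointer at offset 0x3C, so the bytes in between can be rewritten without breaking execution. A minimal sketch of that manipulation, on a synthetic header buffer (the helper and payload are illustrative, not the paper's algorithm, which chooses the bytes via an optimization attack):

```python
import struct

MAGIC_END = 2        # offsets 0-1 hold the "MZ" magic
E_LFANEW_OFF = 0x3C  # 4-byte little-endian offset of the PE header

def perturb_dos_header(pe_bytes: bytes, payload: bytes) -> bytes:
    """Overwrite the loader-ignored DOS-header region with attacker bytes,
    leaving the magic and e_lfanew intact."""
    buf = bytearray(pe_bytes)
    for i, off in enumerate(range(MAGIC_END, E_LFANEW_OFF)):
        buf[off] = payload[i % len(payload)]
    return bytes(buf)

# Toy PE-like header: "MZ", zero padding, e_lfanew pointing at 0x80.
header = bytearray(0x40)
header[:2] = b"MZ"
struct.pack_into("<I", header, E_LFANEW_OFF, 0x80)

adv = perturb_dos_header(bytes(header), b"\xCC")
```

Only 0x3A bytes change, yet for a detector whose features include these raw header bytes that is enough room for an attack to move the sample across the decision boundary.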
Cryptography and Security