1 code implementation • 30 Apr 2023 • Giovanni Apruzzese, Pavel Laskov, Johannes Schneider
Unfortunately, the value of ML for NID depends on a plethora of factors, such as hardware, that are often neglected in the scientific literature.
no code implementations • 29 Dec 2022 • Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin A. Roundy
Recent years have seen a proliferation of research on adversarial machine learning.
no code implementations • 11 Dec 2022 • Giovanni Apruzzese, V. S. Subrahmanian
In this paper, we propose a set of Gray-Box attacks on PDs that an adversary may use, which vary depending on the knowledge the adversary has about the PD.
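To make the knowledge-stratified attack idea concrete, here is a minimal Python sketch, assuming PD here denotes a phishing detector; the knowledge tiers, feature names, and function are hypothetical illustrations, not the paper's actual taxonomy or code.

```python
from enum import Enum, auto

class Knowledge(Enum):
    """Hypothetical tiers of attacker knowledge about the PD."""
    FEATURES = auto()       # knows which features the PD analyzes
    TRAINING_DATA = auto()  # knows (part of) the training distribution
    ORACLE = auto()         # can query the PD and observe its verdicts

def craft_attack(webpage, knowledge):
    """Choose a perturbation strategy according to what the attacker knows."""
    if knowledge is Knowledge.FEATURES:
        webpage["url"] += "/login"        # perturb only the known features
    elif knowledge is Knowledge.TRAINING_DATA:
        webpage["num_links"] = 12         # mimic benign-looking statistics
    elif knowledge is Knowledge.ORACLE:
        pass  # would keep the candidate variant the PD scores most benign
    return webpage
```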
1 code implementation • 24 Oct 2022 • Ying Yuan, Giovanni Apruzzese, Mauro Conti
By considering the application of ML for Phishing Website Detection (PWD), we formalize the "evasion-space" in which an adversarial perturbation can be introduced to fool an ML-PWD -- demonstrating that even perturbations in the "feature-space" are useful.
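As a rough illustration of a feature-space perturbation, the sketch below edits the feature vector a hypothetical ML-PWD would see, without touching the website itself; the feature names and values are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

# Hypothetical feature vector of a webpage, as an ML-PWD might see it:
# [url_length, num_dots_in_url, num_external_links, has_https]
x = np.array([54.0, 3.0, 17.0, 0.0])

def feature_space_perturbation(x):
    """Perturb the *feature representation* directly, without changing the
    website: the "feature-space" end of the paper's evasion-space."""
    x_adv = x.copy()
    x_adv[1] = max(x_adv[1] - 1.0, 0.0)  # pretend the URL has fewer dots
    x_adv[3] = 1.0                       # pretend the page uses HTTPS
    return x_adv

x_adv = feature_space_perturbation(x)
# A "problem-space" perturbation would instead edit the page itself (e.g.
# actually deploy HTTPS), and the features would change as a side effect.
```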
1 code implementation • 17 Oct 2022 • Pier Paolo Tricomi, Lisa Facciolo, Giovanni Apruzzese, Mauro Conti
This paper is the first to investigate such a problem.
no code implementations • 4 Jul 2022 • Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova, Pavel Laskov
ML, however, is known to be vulnerable to adversarial examples; moreover, as our paper will show, the 5G context is exposed to yet another type of adversarial ML attack that cannot be formalized with existing threat models.
no code implementations • 20 Jun 2022 • Giovanni Apruzzese, Pavel Laskov, Edgardo Montes de Oca, Wissam Mallouli, Luis Burdalo Rapa, Athanasios Vasileios Grammatopoulos, Fabio Di Franco
This paper is the first attempt to provide a holistic understanding of the role of ML across the entire cybersecurity domain, addressed to any reader with an interest in this topic.
2 code implementations • 18 May 2022 • Giovanni Apruzzese, Pavel Laskov, Aliya Tastemirova
A potential solution to this problem is semisupervised learning (SsL), which combines small labelled datasets with large amounts of unlabelled data.
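As a generic example of SsL, the sketch below uses self-training (pseudo-labelling) via scikit-learn, where unlabelled samples are marked with -1; the synthetic data and the confidence threshold are placeholders, and self-training is one common SsL method rather than necessarily the ones the paper evaluates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))   # placeholder features (e.g. NetFlow-derived)
y = (X[:, 0] > 0).astype(int)     # placeholder ground-truth labels

# Keep labels for roughly 5% of the samples; mark the rest as unlabelled
# with -1, the convention expected by scikit-learn's semi-supervised API.
y_ssl = y.copy()
y_ssl[rng.random(len(y)) > 0.05] = -1

# Self-training: iteratively pseudo-label unlabelled samples on which the
# base classifier is confident, then retrain on the enlarged labelled set.
model = SelfTrainingClassifier(RandomForestClassifier(), threshold=0.9)
model.fit(X, y_ssl)
```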
no code implementations • 18 Mar 2022 • Johannes Schneider, Giovanni Apruzzese
We propose to generate adversarial samples by modifying the activations of upper layers that encode semantically meaningful concepts.
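A minimal PyTorch sketch of the mechanism: a forward hook replaces an upper layer's activations before the classification head, so the perturbation acts in activation space rather than input space. The toy network, the chosen layer, and the random noise are placeholder assumptions; the paper would presumably optimize the perturbation rather than sample it randomly.

```python
import torch
import torch.nn as nn

# Toy classifier; the idea applies to any deep network whose upper layers
# encode higher-level, semantically meaningful concepts.
net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # lower layers
    nn.Linear(64, 64), nn.ReLU(),   # "upper" layer whose output we perturb
    nn.Linear(64, 10),              # classification head
)

def perturb_activation(module, inputs, output):
    """Forward hook: returning a tensor replaces the layer's output, so the
    perturbation happens in activation space, not on the raw input."""
    return output + 0.5 * torch.randn_like(output)  # placeholder noise

handle = net[3].register_forward_hook(perturb_activation)
logits_adv = net(torch.randn(1, 32))  # prediction under the perturbation
handle.remove()                       # restore normal behaviour
```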
1 code implementation • 9 Mar 2022 • Giovanni Apruzzese, Luca Pajola, Mauro Conti
By using XeNIDS on six well-known datasets, we demonstrate the concealed potential, but also the risks, of cross-evaluations of ML-NIDS.
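The following sketch conveys the core of such a cross-evaluation: train a detector on each dataset and test it on every other. The function name and the `datasets` layout are illustrative assumptions, not XeNIDS's actual API.

```python
from itertools import product
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def cross_evaluate(datasets):
    """Train on each NIDS dataset, test on every other.
    `datasets` maps a name to (X_train, y_train, X_test, y_test) tuples
    sharing one feature schema, the hard prerequisite in practice."""
    results = {}
    for src, dst in product(datasets, repeat=2):
        Xtr, ytr, _, _ = datasets[src]
        _, _, Xte, yte = datasets[dst]
        clf = RandomForestClassifier().fit(Xtr, ytr)
        results[(src, dst)] = f1_score(yte, clf.predict(Xte))
    return results  # the diagonal recovers the classic intra-dataset setting
```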
no code implementations • 15 Jun 2021 • Andrea Corsini, Shanchieh Jay Yang, Giovanni Apruzzese
Recent advances in deep learning have renewed research interest in machine learning for Network Intrusion Detection Systems (NIDS).
no code implementations • 9 Dec 2019 • Giovanni Apruzzese, Mauro Andreolini, Michele Colajanni, Mirco Marchetti
The experimental results on millions of labelled network flows show that the new detector has a twofold value: it outperforms state-of-the-art detectors when they are subject to adversarial attacks, and it exhibits robust results in both adversarial and non-adversarial scenarios.
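A sketch of what such a twofold evaluation might look like for a flow-based detector: measure the detection rate on unmodified malicious flows, then again after slightly inflating a few flow features. The column indices, the perturbation magnitude, and the assumption that label 1 denotes malicious flows are all illustrative, not taken from the paper.

```python
import numpy as np

def evaluate_twofold(clf, X_malicious, perturbed_cols, delta=1.0):
    """Detection rate on malicious flows with and without a simple evasion
    attempt that inflates a few flow features (e.g. duration, packets).
    Assumes label 1 = malicious; columns and delta are placeholders."""
    baseline = clf.predict(X_malicious).mean()   # non-adversarial scenario
    X_adv = X_malicious.copy()
    X_adv[:, perturbed_cols] += delta            # adversarial scenario
    attacked = clf.predict(X_adv).mean()
    return baseline, attacked  # a robust detector keeps both rates high
```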