no code implementations • 4 Jul 2022 • Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova, Pavel Laskov
ML, however, is known to be vulnerable to adversarial examples; moreover, as our paper will show, the 5G context is exposed to yet another type of adversarial ML attack that cannot be formalized with existing threat models.
no code implementations • 20 Jun 2022 • Giovanni Apruzzese, Pavel Laskov, Edgardo Montes de Oca, Wissam Mallouli, Luis Burdalo Rapa, Athanasios Vasileios Grammatopoulos, Fabio Di Franco
This paper is the first attempt to provide a holistic understanding of the role of ML across the entire cybersecurity domain, addressed to any reader with an interest in this topic.
1 code implementation • 18 May 2022 • Giovanni Apruzzese, Pavel Laskov, Aliya Tastemirova
A potential solution to this problem is semi-supervised learning (SsL), which combines small labelled datasets with large amounts of unlabelled data.
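As a generic illustration of the SsL idea (not the paper's specific pipeline), self-training is one such method: a base classifier is fitted on the few labelled points, then iteratively pseudo-labels the unlabelled ones. A minimal sketch with scikit-learn, where unlabelled points are marked with `-1`:

```python
# Hypothetical sketch of semi-supervised self-training on synthetic data;
# the dataset, split ratio, and base classifier are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

rng = np.random.default_rng(0)
y_partial = y.copy()
mask = rng.random(len(y)) < 0.9   # hide 90% of the labels
y_partial[mask] = -1              # -1 marks a point as unlabelled

# Self-training: fit on the labelled 10%, then pseudo-label the rest.
model = SelfTrainingClassifier(SVC(probability=True)).fit(X, y_partial)
print(model.score(X, y))          # accuracy against the true labels
```

The key convention is the `-1` sentinel: the classifier trains only on points whose label is not `-1`, then propagates confident predictions to the remainder.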
1 code implementation • 21 Aug 2017 • Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, Fabio Roli
In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data.
1 code implementation • 27 Jun 2012 • Battista Biggio, Blaine Nelson, Pavel Laskov
Such attacks inject specially crafted training data that increases the SVM's test error.
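The effect described above can be demonstrated with a much simpler stand-in for the paper's gradient-based attack: flipping the labels of a fraction of the training points (a crude poisoning strategy; the dataset, flip rate, and kernel here are illustrative assumptions) and comparing test accuracy of clean versus poisoned SVMs:

```python
# Hypothetical sketch: label-flipping poisoning against an SVM.
# NOT the gradient-based attack of Biggio et al. (2012), just a
# minimal illustration that tainted training data raises test error.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Baseline SVM trained on clean data.
clean = SVC(kernel="linear").fit(X_tr, y_tr)

# Poison the training set by flipping 20% of its labels.
rng = np.random.default_rng(0)
y_poison = y_tr.copy()
idx = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poison[idx] = 1 - y_poison[idx]
poisoned = SVC(kernel="linear").fit(X_tr, y_poison)

print(clean.score(X_te, y_te), poisoned.score(X_te, y_te))
```

The gradient-based attack in the paper is far more sample-efficient: instead of flipping many labels, it optimizes the position of a few injected points to maximally degrade the learned decision boundary.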
no code implementations • NeurIPS 2009 • Marius Kloft, Ulf Brefeld, Pavel Laskov, Klaus-Robert Müller, Alexander Zien, Sören Sonnenburg
Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations and hence support interpretability.