no code implementations • 7 Feb 2024 • Tsufit Shua, Mahmood Sharif
We evaluated our approach in the domain of traffic-sign recognition, allowing it to alter traffic-sign pictograms (i.e., symbols within the signs) and their colors.
no code implementations • 18 Dec 2023 • Zebin Yun, Achi-Or Weingarten, Eyal Ronen, Mahmood Sharif
We also found that the best composition significantly outperformed the state of the art (e.g., 93.7% vs. $\le$82.7% average transferability on ImageNet from normally trained surrogates to adversarially trained targets).
1 code implementation • 29 Jun 2023 • Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks.
1 code implementation • 7 Mar 2022 • Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.
1 code implementation • 28 Dec 2021 • Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it.
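To give a flavor of how a choice of loss plugs into a gradient-based attack, below is a minimal, hypothetical PGD-style sketch in PyTorch. The `margin_loss` shown is a standard CW-style margin used only for illustration, not necessarily the loss proposed in the paper, and `pgd_attack` is a generic loop rather than Auto-PGD itself.

```python
# Hypothetical sketch: a PGD-style attack loop with a pluggable per-example loss.
# margin_loss is a standard CW-style margin, shown only as an illustration.
import torch

def margin_loss(logits, labels):
    # Reward any wrong class overtaking the true class.
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, labels.unsqueeze(1), float('-inf'))
    return other.max(dim=1).values - true

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40, loss_fn=margin_loss):
    # Random start inside the L-inf ball, then iterated signed-gradient steps.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```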
1 code implementation • 19 Dec 2019 • Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, Saurabh Shintre
Moreover, we found that our attack can fool some commercial anti-viruses, in certain cases with a success rate of 85%.
no code implementations • 19 Dec 2019 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter
This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers.
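For readers unfamiliar with the threat being defended against, the sketch below crafts an adversarial example with FGSM (Goodfellow et al.), a standard prior technique; it illustrates the small-perturbation attack model, not the $n$-ML defense itself.

```python
# Minimal FGSM sketch: perturb a benign input x by a small, bounded amount
# in the direction that increases the classification loss.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, bounded by eps in L-inf, clipped to valid pixels.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```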
no code implementations • 27 Feb 2018 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter
Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.
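As a small illustration of the metric in question, the sketch below computes per-example $L_p$ distances between two image batches, assuming pixel tensors of shape (batch, channels, height, width); this is the notion of "nearness" the paper argues does not capture perceptual similarity.

```python
# Illustrative L_p distance between two image batches (per example).
import torch

def lp_distance(x1, x2, p=2):
    diff = (x1 - x2).flatten(start_dim=1)
    if p == float('inf'):
        return diff.abs().max(dim=1).values
    return diff.abs().pow(p).sum(dim=1).pow(1.0 / p)
```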
3 code implementations • 31 Dec 2017 • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains.