Search Results for author: Mahmood Sharif

Found 9 papers, 5 papers with code

Adversarial Robustness Through Artifact Design

no code implementations • 7 Feb 2024 • Tsufit Shua, Mahmood Sharif

We evaluated our approach in the domain of traffic-sign recognition, allowing it to alter traffic-sign pictograms (i.e., symbols within the signs) and their colors.

Adversarial Robustness • Traffic Sign Recognition

The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations

no code implementations • 18 Dec 2023 • Zebin Yun, Achi-Or Weingarten, Eyal Ronen, Mahmood Sharif

We also found that the best composition significantly outperformed the state of the art (e.g., 93.7% vs. $\le$82.7% average transferability on ImageNet from normally trained surrogates to adversarially trained targets).

Adversarial Robustness • Data Augmentation
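
As a rough illustration of composing augmentations inside an attack, the sketch below averages gradients over a composition of two common input transformations (random resize-and-pad and translation) before taking an FGSM-style step. The augmentations, surrogate model, and hyperparameters are placeholders chosen for illustration, not the composition the paper reports as best.

```python
# Illustrative sketch: composing input augmentations inside an attack's gradient
# computation to encourage transferability. Augmentations and surrogate model
# are placeholders, not the paper's best-performing composition.
import torch
import torch.nn.functional as F
import torchvision.models as models

def resize_and_pad(x, low=0.9):
    """Randomly shrink the image and pad it back to its original size."""
    n, c, h, w = x.shape
    scale = torch.empty(1).uniform_(low, 1.0).item()
    new_h, new_w = int(h * scale), int(w * scale)
    x_small = F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=False)
    pad_h, pad_w = h - new_h, w - new_w
    top = torch.randint(0, pad_h + 1, (1,)).item()
    left = torch.randint(0, pad_w + 1, (1,)).item()
    return F.pad(x_small, (left, pad_w - left, top, pad_h - top))

def translate(x, max_shift=4):
    """Randomly shift the image by a few pixels in each direction."""
    shift_h = torch.randint(-max_shift, max_shift + 1, (1,)).item()
    shift_w = torch.randint(-max_shift, max_shift + 1, (1,)).item()
    return torch.roll(x, shifts=(shift_h, shift_w), dims=(2, 3))

def composed_fgsm(model, x, y, eps=8 / 255, n_samples=5):
    """One FGSM-style step whose gradient is averaged over composed augmentations."""
    x = x.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_samples):
        x_aug = translate(resize_and_pad(x))   # compose the two augmentations
        loss = loss + F.cross_entropy(model(x_aug), y)
    (loss / n_samples).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    surrogate = models.resnet18(weights=None).eval()  # stand-in surrogate model
    images = torch.rand(2, 3, 224, 224)
    labels = torch.tensor([1, 2])
    adv = composed_fgsm(surrogate, images, labels)
```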

Group-based Robustness: A General Framework for Customized Robustness in the Real World

1 code implementation • 29 Jun 2023 • Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks.

Scalable Verification of GNN-based Job Schedulers

1 code implementation • 7 Mar 2022 • Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh

Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.

Scheduling

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

1 code implementation • 28 Dec 2021 • Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it.
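
For context on what an evasion-attack loss can look like, below is a generic margin-style loss in which a negative value corresponds exactly to a successful misclassification. It is only an illustrative sketch of encoding an attack objective directly in a loss, not necessarily the loss function this paper introduces.

```python
# Illustrative sketch of a margin-style evasion loss: the loss is negative exactly
# when some wrong class outscores the true class. A generic example of encoding
# the attack goal in the loss, not necessarily the paper's proposed loss.
import torch

def margin_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-example margin: true-class logit minus the best other-class logit."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))  # hide the true class
    best_other = masked.max(dim=1).values
    return true_logit - best_other  # < 0 iff the example is already misclassified

if __name__ == "__main__":
    logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 3.0, 0.2]])
    labels = torch.tensor([0, 2])
    print(margin_loss(logits, labels))  # tensor([ 1.5000, -2.8000])
```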

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

no code implementations • 19 Dec 2019 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers.

General Classification

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

no code implementations • 27 Feb 2018 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.

Perceptual Distance
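
The sketch below illustrates, on synthetic data, how $L_0$, $L_1$, $L_2$, and $L_\infty$ distances can rank the same pair of perturbations very differently, which is the kind of mismatch with perceptual similarity the paper examines. The toy images and perturbations are assumptions made purely for illustration.

```python
# Quick illustration of L_p distances: the same perturbation can look small or
# large depending on which norm measures it. Synthetic 8x8 "images" only.
import numpy as np

def lp_distance(a: np.ndarray, b: np.ndarray, p) -> float:
    """L_p distance between two images flattened to vectors (p may be 0 or inf)."""
    diff = (a - b).ravel()
    if p == 0:
        return float(np.count_nonzero(diff))   # number of changed pixels
    if p == np.inf:
        return float(np.max(np.abs(diff)))     # largest single-pixel change
    return float(np.sum(np.abs(diff) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Perturbation A: tiny change spread over every pixel.
spread = img + 0.01 * rng.standard_normal((8, 8))
# Perturbation B: large change confined to a single pixel.
sparse = img.copy()
sparse[0, 0] += 0.6

for name, pert in [("spread", spread), ("sparse", sparse)]:
    print(name, {p: round(lp_distance(img, pert, p), 3) for p in (0, 1, 2, np.inf)})
```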

A General Framework for Adversarial Examples with Objectives

3 code implementations • 31 Dec 2017 • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains.

Face Recognition
