Search Results for author: Mohammad A. A. K. Jalwana

Found 6 papers, 2 papers with code

Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers

no code implementations 31 Mar 2023 Naveed Akhtar, Mohammad A. A. K. Jalwana

Addressing the gap, we introduce a new perspective of input-agnostic saliency mapping that computationally estimates the high-level features attributed by the model to its outputs.

CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency

1 code implementation CVPR 2021 Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian

Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
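As a rough illustration of what backpropagation-based image saliency means in general (this is a plain gradient-saliency sketch, not the CAMERAS method, and it assumes PyTorch with an arbitrary classifier):

```python
# Minimal sketch of backpropagation-based image saliency: per-pixel
# importance as the magnitude of the class score's gradient w.r.t. the input.
# NOT the CAMERAS method; the model below is a toy stand-in.
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    """Return an HxW map of |d score / d pixel| for the target class."""
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)  # add batch dimension
    score = model(x)[0, target]          # logit of the target class
    score.backward()                     # backpropagate to the input pixels
    return x.grad[0].abs().amax(dim=0)   # max over channels -> HxW saliency

# Usage with a tiny hypothetical classifier on 3x8x8 inputs:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
sal = gradient_saliency(model, torch.rand(3, 8, 8), target=3)
print(sal.shape)  # torch.Size([8, 8])
```

CAMERAS builds on this family of methods but additionally targets high-resolution, sanity-preserving maps, which the simple gradient above does not provide.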

Orthogonal Deep Models As Defense Against Black-Box Attacks

no code implementations 26 Jun 2020 Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian

On the other hand, deep learning has also been found vulnerable to adversarial attacks, which calls for new techniques to defend deep models against these attacks.

Attack to Explain Deep Representation

no code implementations CVPR 2020 Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian

The accumulated signal gradually manifests itself as a collection of visually salient features of the target label (in model fooling), casting adversarial perturbations as primitive features of the target label.

Image Generation, Image Manipulation

Label Universal Targeted Attack

1 code implementation 27 May 2019 Naveed Akhtar, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Ajmal Mian

We introduce Label Universal Targeted Attack (LUTA), which makes a deep model predict a label of the attacker's choice for 'any' sample of a given source class with high probability.
