no code implementations • 31 Mar 2023 • Naveed Akhtar, Mohammad A. A. K. Jalwana
Addressing this gap, we introduce a new perspective on input-agnostic saliency mapping that computationally estimates the high-level features the model attributes to its outputs.
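To illustrate the general idea of input-agnostic class-feature estimation (not this paper's actual method), the sketch below uses plain activation maximization: an input is synthesized from noise by gradient ascent on a chosen class logit, so the result reflects features the model itself associates with that output. The model, class index, and hyperparameters are assumptions for the example.

```python
import torch
import torchvision.models as models

# Illustrative activation-maximization sketch (not the paper's method):
# synthesize an input that maximizes a chosen class logit, revealing
# features the model associates with that output class.
model = models.resnet50(weights="IMAGENET1K_V1").eval()
target_class = 207                          # assumed example class index
x = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    loss = -model(x)[0, target_class]       # ascend the target logit
    loss.backward()
    optimizer.step()

class_impression = x.detach()               # input-agnostic "class impression"
```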
1 code implementation • CVPR 2021 • Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
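As a point of reference for what backpropagation saliency computes, here is a minimal vanilla-gradient baseline (a common starting point in this line of work, not the specific method proposed in the paper): pixel importance is taken as the magnitude of the gradient of the predicted class score with respect to that pixel. The model choice and preprocessing assumptions are illustrative.

```python
import torch
import torchvision.models as models

# Vanilla gradient saliency sketch (a standard baseline, not this paper's
# method): pixel importance = |d(class score) / d(pixel)|.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

def gradient_saliency(image):               # image: (1, 3, H, W), preprocessed
    image = image.clone().requires_grad_(True)
    logits = model(image)
    score = logits[0, logits.argmax()]       # score of the predicted class
    score.backward()
    # Max absolute gradient over channels gives a per-pixel saliency map.
    return image.grad.abs().max(dim=1)[0].squeeze(0)
```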
no code implementations • 26 Jun 2020 • Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian
On the other hand, deep learning has also been found vulnerable to adversarial attacks, which calls for new techniques to defend deep models against these attacks.
no code implementations • CVPR 2020 • Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian
The accumulated signal gradually manifests itself as a collection of visually salient features of the target label (in model fooling), casting adversarial perturbations as primitive features of the target label.
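The notion of a gradient signal accumulating into target-label features can be illustrated with a generic iterative targeted attack loop (a PGD-style sketch, not the paper's exact algorithm): repeated gradient steps toward the target label build up a perturbation whose structure reflects that label. The step size, iteration count, and budget `eps` are assumptions.

```python
import torch
import torch.nn.functional as F

# Generic iterative targeted attack sketch (not the paper's algorithm):
# repeated gradient steps toward a target label accumulate a perturbation
# that comes to resemble features of that label.
def targeted_perturbation(model, x, target, eps=0.1, step=0.01, iters=100):
    delta = torch.zeros_like(x, requires_grad=True)
    target = torch.tensor([target])
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()   # move toward the target label
            delta.clamp_(-eps, eps)             # keep the perturbation bounded
        delta.grad.zero_()
    return delta.detach()                       # accumulated target-label signal
```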
no code implementations • 14 Jun 2019 • Uzair Nadeem, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Roberto Togneri, Ferdous Sohel
We use this concept to localize the camera pose (position and orientation) of a query image within dense point clouds.
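For context, the standard geometry behind such localization is the Perspective-n-Point problem: given 2D keypoints in the query image matched to 3D points in the cloud, the camera pose can be recovered with a PnP solver. The OpenCV-based sketch below is illustrative rather than the paper's pipeline; the 2D-3D matching step and camera intrinsics `K` are assumed to be given.

```python
import cv2
import numpy as np

# Illustrative pose recovery from assumed 2D-3D matches (not the paper's
# pipeline): solve Perspective-n-Point with RANSAC for the camera rotation
# and translation relative to the point cloud.
def localize_camera(points_3d, points_2d, K):
    # points_3d: (N, 3) cloud coordinates, points_2d: (N, 2) image pixels
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)              # rotation matrix from axis-angle
    return R, tvec, inliers
```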
1 code implementation • 27 May 2019 • Naveed Akhtar, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Ajmal Mian
We introduce Label Universal Targeted Attack (LUTA), which makes a deep model predict a label of the attacker's choice for 'any' sample of a given source class with high probability.
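A minimal sketch of the underlying idea of a label-universal targeted perturbation (a simplified gradient loop, not LUTA's actual optimization): a single perturbation is accumulated over batches of source-class samples so that it steers predictions toward the chosen target label, while staying within a norm budget. The data loader, budget, and step size are assumptions.

```python
import torch
import torch.nn.functional as F

# Simplified label-universal targeted perturbation sketch (not LUTA's actual
# optimization): one perturbation, learned over source-class samples, that
# pushes predictions toward a chosen target label.
def universal_targeted_perturbation(model, source_loader, target, eps=0.05,
                                    step=0.005, epochs=5):
    delta = None
    for _ in range(epochs):
        for x, _ in source_loader:               # batches of source-class images
            if delta is None:
                delta = torch.zeros_like(x[:1])  # shared across all samples
            delta.requires_grad_(True)
            tgt = torch.full((x.size(0),), target, dtype=torch.long)
            loss = F.cross_entropy(model(x + delta), tgt)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta - step * grad.sign()).clamp(-eps, eps).detach()
    return delta
```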