no code implementations • 19 Mar 2024 • Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy
The performance of deep models, including Vision Transformers, is known to be vulnerable to adversarial attacks.
no code implementations • 14 Dec 2022 • Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, Mario Sznaier, Octavia Camps, Jennifer Dy
Adversarial attacks hamper the decision-making ability of neural networks by perturbing the input signal.
no code implementations • 24 Mar 2021 • Jaesung Choe, Kyungdon Joo, Tooba Imtiaz, In So Kweon
The key idea of our network is to exploit sparse and accurate point clouds as a cue for guiding correspondences of stereo images in a unified 3D volume space.
no code implementations • 7 Oct 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon
Since the proposed attack generates a universal adversarial perturbation that is discriminative to targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP).
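A minimal sketch of the class-discriminative idea: the objective rewards a perturbation that raises the classification loss on targeted classes while keeping the loss on non-targeted classes low. This is an illustrative numpy toy, not the paper's implementation; the function name `cd_uap_objective` and the weighting `alpha` are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def cd_uap_objective(logits_t, labels_t, logits_nt, labels_nt, alpha=1.0):
    # Illustrative class-discriminative objective (to be minimized over the
    # perturbation): push targeted classes toward misclassification while
    # preserving predictions on non-targeted classes.
    attack_term = -cross_entropy(logits_t, labels_t)      # reward targeted errors
    preserve_term = cross_entropy(logits_nt, labels_nt)   # penalize collateral damage
    return attack_term + alpha * preserve_term
```

A perturbation that only misclassifies targeted samples scores lower (better) under this objective than one leaving everything correct, which is the discriminative behavior the abstract describes.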
1 code implementation • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon
This universal perturbation shifts one targeted source class toward a sink class while having a limited adversarial effect on the other, non-targeted source classes, so as to avoid raising suspicion.
no code implementations • 13 Jul 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon
We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information by measuring the accuracy drop on the original validation dataset.
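The DtM/DfM cycle can be illustrated with a deliberately tiny stand-in: fit a "model" (here, class centroids — an assumption made so the sketch runs without a deep-learning framework), sample new data from that model, refit, and track accuracy on the held-out original validation set across rounds. All names and the Gaussian toy data are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    # Two well-separated Gaussian classes (stand-in for a real dataset).
    x0 = rng.normal(-2.0, 1.0, size=(n, 2))
    x1 = rng.normal(+2.0, 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # Data to Model (DtM): the "model" is just the per-class mean.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def sample_from_model(centroids, n=200, noise=1.5):
    # Data from Model (DfM): regenerate a dataset from the fitted model.
    X = np.vstack([rng.normal(c, noise, size=(n, 2)) for c in centroids])
    return X, np.array([0] * n + [1] * n)

def accuracy(centroids, X, y):
    pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

X_val, y_val = make_data()          # original validation set, fixed throughout
X, y = make_data()
accs = []
for _ in range(5):                  # repeat DtM -> DfM, tracking validation accuracy
    centroids = fit_centroids(X, y)
    accs.append(accuracy(centroids, X_val, y_val))
    X, y = sample_from_model(centroids)
```

In this toy setting the centroid model loses almost no information across rounds; the paper's point is that with deep networks the accuracy drop over DtM/DfM iterations quantifies how much feature-mapping information is lost.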
1 code implementation • CVPR 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon
We utilize this vector representation to understand adversarial examples by disentangling clean images from adversarial perturbations and analyzing their influence on each other.
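The disentanglement view treats an adversarial example as clean image plus perturbation, x_adv = x + δ, and studies each component's contribution to the representation. A minimal sketch, assuming a linear map so the decomposition is exact (the paper's models are deep networks, where this only holds approximately); `W`, `x`, and `delta` are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))          # toy linear "feature extractor"

x = rng.normal(size=4)               # clean input
delta = 0.1 * rng.normal(size=4)     # adversarial perturbation
x_adv = x + delta                    # adversarial example

# Under a linear map, the representation of the adversarial example splits
# exactly into the clean contribution and the perturbation contribution,
# so each can be analyzed on its own.
f_adv = W @ x_adv
f_clean = W @ x
f_delta = W @ delta

assert np.allclose(f_adv, f_clean + f_delta)
```

Comparing `f_clean` and `f_delta` (e.g., their relative magnitudes or correlation) is one simple way to quantify how the two components influence the model's output.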