1 code implementation • 22 Mar 2023 • Alireza Abdollahpourrostam, Mahed Abroshan, Seyed-Mohsen Moosavi-Dezfooli
Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal $\ell_2$ adversarial perturbations.
no code implementations • 1 Nov 2022 • Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, Jian-Huang Lai, Xiaohua Xie
To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
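At a high level, the scheme pairs each adversarial example with a perturbation pushed in the opposite direction and penalizes disagreement between the two outputs. Below is a minimal sketch of such a consistency term, assuming a standard PyTorch classifier; the single-step attacks, KL penalty, and weighting `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inverse_adversarial_loss(model, x, y, eps=8/255, beta=1.0):
    # Sketch only: one-step attacks and a KL consistency term;
    # hyperparameters here are illustrative assumptions.
    x = x.detach().clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # raises the loss
    x_inv = (x - eps * grad.sign()).clamp(0, 1).detach()  # lowers the loss ("inverse")

    # Encourage similar outputs on the adversarial / inverse pair.
    consistency = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(model(x_inv), dim=1),
                           reduction='batchmean')
    return F.cross_entropy(model(x_adv), y) + beta * consistency
```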
1 code implementation • 27 Dec 2021 • Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data.
Ranked #19 on Domain Generalization on ImageNet-C
1 code implementation • CVPR 2022 • Mohammadhossein Bahari, Saeed Saadatnejad, Ahmad Rahimi, Mohammad Shaverdikondori, Amir-Hossein Shahidzadeh, Seyed-Mohsen Moosavi-Dezfooli, Alexandre Alahi
We further show that the generated scenes (i) are realistic, since they exist in the real world, and (ii) can be used to make existing models more robust, yielding 30-40% reductions in the off-road rate.
no code implementations • ICLR 2022 • Rahul Rade, Seyed-Mohsen Moosavi-Dezfooli
While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy.
no code implementations • 29 Sep 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai
Adversarial attacks have been developed as intentionally designed perturbations added to the inputs in order to fool deep neural network classifiers.
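For a concrete picture of what such a perturbation looks like, the classic one-step FGSM attack (a generic illustration, not the method studied in this paper) nudges the input in the direction that most increases the loss:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Single-step attack: move each pixel by +/- eps along the
    # sign of the loss gradient, then clip to the valid image range.
    x = x.detach().clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```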
no code implementations • 29 Sep 2021 • Ahmad Ajalloeian, Seyed-Mohsen Moosavi-Dezfooli, Michalis Vlachos, Pascal Frossard
However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.
2 code implementations • 24 Aug 2021 • Saeed Saadatnejad, Mohammadhossein Bahari, Pedram Khorsandi, Mohammad Saneian, Seyed-Mohsen Moosavi-Dezfooli, Alexandre Alahi
An attack is a small yet carefully crafted perturbation designed to make predictors fail.
no code implementations • 24 Jul 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai
Adversarial training has been shown to be an effective approach for improving the robustness of image classifiers against white-box attacks.
2 code implementations • ICMLW 2021 • Rahul Rade, Seyed-Mohsen Moosavi-Dezfooli
While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy.
1 code implementation • NeurIPS 2021 • Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
For certain infinitely wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation.
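For context, the first-order approximation in question is the standard linearization of the network around its initialization $\theta_0$, whose kernel is the empirical NTK:

```latex
f_{\mathrm{lin}}(x;\theta) = f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^\top (\theta - \theta_0),
\qquad
\hat{\Theta}(x,x') = \nabla_\theta f(x;\theta_0)^\top \nabla_\theta f(x';\theta_0).
```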
no code implementations • 7 May 2021 • Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann
For a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (the adversarial set), rendering any existing generalization bound based on uniform convergence vacuous.
no code implementations • 6 May 2021 • Peilin Kang, Seyed-Mohsen Moosavi-Dezfooli
In this paper, we find that catastrophic overfitting (CO) is not limited to FGSM, but also occurs in $\mbox{DF}^{\infty}$-1 adversarial training.
no code implementations • 29 Apr 2021 • Guillermo Ortiz-Jimenez, Itamar Franco Salazar-Reque, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we propose to study this problem from a geometric perspective, with the aim of understanding two key characteristics of neural network solutions in underspecified settings: how is the geometry of the learned function related to the data representation?
no code implementations • 19 Oct 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.
2 code implementations • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.
1 code implementation • CVPR 2020 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Huaiyu Dai
We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings where the adversary can only generate a small number of queries, each of them returning the top-$1$ label of the classifier.
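The geometric intuition can be sketched as follows: near the decision boundary, small random probes whose top-1 label flips reveal the boundary's normal direction. The snippet below is a simplified sketch of this normal-estimation step only (`query_top1` is an assumed black-box oracle); the full method additionally exploits low-frequency subspaces and iterative refinement not shown here.

```python
import torch

def estimate_boundary_normal(query_top1, x_b, y0, n_queries=100, sigma=0.01):
    # x_b: a point near the decision boundary; y0: original top-1 label.
    # Probes whose label flips lie on the adversarial side of the
    # boundary, so a signed average of probes approximates its normal.
    normal = torch.zeros_like(x_b)
    for _ in range(n_queries):
        eta = sigma * torch.randn_like(x_b)
        sign = 1.0 if query_top1(x_b + eta) != y0 else -1.0
        normal += sign * eta
    return normal / normal.norm()
```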
1 code implementation • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
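One crude way to operationalize "distance of samples to the decision boundary" is the norm of the smallest perturbation that flips a sample's label. The sketch below uses a small-step gradient walk as a margin proxy; it is an illustrative stand-in, not the paper's actual measure.

```python
import torch
import torch.nn.functional as F

def margin_proxy(model, x, y, step=0.01, max_iter=100):
    # Assumes a single example x of shape (1, C, H, W) and an int label y.
    # Walk in small L2-normalized gradient steps until the label flips;
    # the accumulated perturbation norm is a crude margin estimate.
    x_adv = x.detach().clone()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != y:
            break
        loss = F.cross_entropy(logits, torch.tensor([y]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step * grad / grad.norm()).detach()
    return (x_adv - x).norm().item()
```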
1 code implementation • ICCV 2019 • Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
The qFool method can drastically reduce the number of queries compared to previous decision-based attacks while reaching the same quality of adversarial examples.
1 code implementation • CVPR 2019 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, Pascal Frossard
State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations.
1 code implementation • CVPR 2019 • Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data.
no code implementations • CVPR 2018 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto
We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary.
no code implementations • 19 Feb 2018 • Seyed-Mohsen Moosavi-Dezfooli, Ashish Shrivastava, Oncel Tuzel
Improving the robustness of neural networks against these attacks is important, especially for security-critical applications.
no code implementations • 4 Dec 2017 • Yiren Zhou, Seyed-Mohsen Moosavi-Dezfooli, Ngai-Man Cheung, Pascal Frossard
First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy.
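The idea can be illustrated with a brute-force probe: quantize one layer at a time and record the resulting accuracy drop. This is a hedged sketch, not the paper's method (which derives an analytical estimate); `evaluate` is an assumed helper returning validation accuracy.

```python
import copy
import torch

def quantize_tensor(w, n_bits=8):
    # Uniform symmetric quantization of a weight tensor.
    scale = w.abs().max() / (2 ** (n_bits - 1) - 1)
    return torch.round(w / scale) * scale

def layerwise_quantization_effect(model, evaluate, n_bits=8):
    # `evaluate` is an assumed helper: model -> validation accuracy.
    baseline = evaluate(model)
    effects = {}
    for name, module in model.named_modules():
        if getattr(module, 'weight', None) is not None:
            probe = copy.deepcopy(model)
            w = dict(probe.named_modules())[name].weight
            w.data = quantize_tensor(w.data, n_bits)
            effects[name] = baseline - evaluate(probe)  # accuracy drop
    return effects
```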
1 code implementation • CVPR 2018 • Can Kanbak, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks.
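As a toy analogue of such an invariance measurement, one can sweep a single transformation parameter (here, rotation angle) and report the smallest value that changes the prediction. This is only a rough proxy; ManiFool itself performs a geodesic search over transformation manifolds, which the sketch below does not reproduce.

```python
import torchvision.transforms.functional as TF

def min_flipping_rotation(model, x, max_deg=180.0, steps=90):
    # Assumes a single image batch x of shape (1, C, H, W).
    y0 = model(x).argmax(dim=1)
    for k in range(1, steps + 1):
        angle = k * max_deg / steps
        for a in (angle, -angle):
            if (model(TF.rotate(x, a)).argmax(dim=1) != y0).any():
                return angle  # smallest rotation that changes the prediction
    return float('inf')  # no flip found: invariant over the tested range
```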
no code implementations • 26 May 2017 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto
The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space.
no code implementations • ICLR 2018 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers.
9 code implementations • CVPR 2017 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
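The iterative construction can be sketched as follows: accumulate per-image minimal perturbations into a shared vector and keep it inside a small L2 ball. Here `minimal_attack` stands in for the DeepFool subroutine used in the paper, and single-image batches are assumed.

```python
import torch

def universal_perturbation(model, data, minimal_attack, xi=10.0, n_epochs=5):
    # data: list of single-image batches of shape (1, C, H, W).
    # `minimal_attack(model, x)` is an assumed stand-in for DeepFool,
    # returning a small perturbation that flips the prediction on x.
    v = torch.zeros_like(data[0])
    for _ in range(n_epochs):
        for x in data:
            x_pert = (x + v).clamp(0, 1)
            if model(x_pert).argmax(dim=1).item() == model(x).argmax(dim=1).item():
                v = v + minimal_attack(model, x_pert)        # push x across the boundary
                v = v * torch.clamp(xi / v.norm(), max=1.0)  # project onto the L2 ball
    return v
```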
no code implementations • NeurIPS 2016 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes.
3 code implementations • CVPR 2016 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard
State-of-the-art deep neural networks have achieved impressive results on many image classification tasks.