Search Results for author: Seyed-Mohsen Moosavi-Dezfooli

Found 32 papers, 16 papers with code

Universal adversarial perturbations

10 code implementations CVPR 2017 Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard

Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
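
As a rough illustration of the claim (not the algorithm used to craft such perturbations), the hedged sketch below shows how a single, fixed perturbation v would be added to every image and how its fooling rate could be measured; `model`, `loader`, and `v` are hypothetical PyTorch objects.

```python
# Minimal sketch (not the paper's algorithm): apply one fixed, image-agnostic
# perturbation v to every input and measure the fraction of label flips.
# `model`, `loader`, and `v` are assumed to exist; names are hypothetical.
import torch

@torch.no_grad()
def fooling_rate(model, loader, v, clamp=(0.0, 1.0)):
    model.eval()
    flipped, total = 0, 0
    for x, _ in loader:  # loader is assumed to yield (images, labels)
        clean_pred = model(x).argmax(dim=1)
        pert_pred = model((x + v).clamp(*clamp)).argmax(dim=1)
        flipped += (clean_pred != pert_pred).sum().item()
        total += x.size(0)
    return flipped / total  # high value => v is a strong universal perturbation
```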

Vehicle trajectory prediction works, but not everywhere

1 code implementation CVPR 2022 Mohammadhossein Bahari, Saeed Saadatnejad, Ahmad Rahimi, Mohammad Shaverdikondori, Amir-Hossein Shahidzadeh, Seyed-Mohsen Moosavi-Dezfooli, Alexandre Alahi

We further show that the generated scenes (i) are realistic since they do exist in the real world, and (ii) can be used to make existing models more robust, yielding 30-40% reductions in the off-road rate.

Scene Generation · Self-Driving Cars +1

SparseFool: a few pixels make a big difference

1 code implementation CVPR 2019 Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data.

Image Classification

PRIME: A few primitives can boost robustness to common corruptions

1 code implementation 27 Dec 2021 Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data.

Computational Efficiency · Data Augmentation +2

GeoDA: a geometric framework for black-box adversarial attacks

1 code implementation CVPR 2020 Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Huaiyu Dai

We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings where the adversary can only generate a small number of queries, each of them returning the top-$1$ label of the classifier.
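
To make the threat model concrete, here is a hedged sketch of the label-only query interface such decision-based attacks operate in, together with a simple binary search that locates the decision boundary with few queries. It illustrates the setting only, not the GeoDA algorithm; `model`, `x_clean`, and `x_adv` are hypothetical.

```python
# Sketch of the label-only (decision-based) setting, not the GeoDA algorithm.
# `model`, `x_clean` (correctly classified) and `x_adv` (already misclassified)
# are hypothetical; x_* are single unbatched image tensors.
import torch

@torch.no_grad()
def top1_label(model, x):
    # The only feedback the attacker receives: the predicted class index.
    return model(x.unsqueeze(0)).argmax(dim=1).item()

@torch.no_grad()
def boundary_point(model, x_clean, x_adv, true_label, steps=10):
    """Binary search between a clean and a misclassified point, landing near
    the decision boundary using only `steps` label queries."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if top1_label(model, x_mid) == true_label:
            lo = mid  # still on the clean side: move toward x_adv
        else:
            hi = mid  # already adversarial: move toward x_clean
    return (1 - hi) * x_clean + hi * x_adv  # approximately on the boundary
```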

Hold me tight! Influence of discriminative features on deep network boundaries

1 code implementation NeurIPS 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.

Adversarial Robustness

What can linearized neural networks actually say about generalization?

1 code implementation NeurIPS 2021 Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

For certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation.
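
The "first-order approximation" in question is the model linearized in its parameters around initialization, i.e. a first-order Taylor expansion whose kernel is the empirical NTK. Below is a minimal sketch of that linearization, assuming PyTorch 2.x's torch.func; `model` and the parameter dictionaries are hypothetical.

```python
# Hedged sketch (assumes PyTorch >= 2.0's torch.func; all names hypothetical):
# the linearized model f_lin(x; w) = f(x; w0) + J_w f(x; w0) · (w - w0).
import torch
from torch.func import functional_call, jvp

def linearized_output(model, params0, params, x):
    names = list(params0)
    primals = tuple(params0[n] for n in names)
    tangents = tuple(params[n] - params0[n] for n in names)  # w - w0

    def f(*flat):
        # Evaluate the model with an explicit set of parameters.
        return functional_call(model, dict(zip(names, flat)), (x,))

    out0, directional = jvp(f, primals, tangents)
    return out0 + directional  # f_lin(x; w)

# e.g. params0 = {k: v.detach().clone() for k, v in model.named_parameters()}
```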

Neural Anisotropy Directions

2 code implementations NeurIPS 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.

Inductive Bias

Geometric robustness of deep networks: analysis and improvement

1 code implementation CVPR 2018 Can Kanbak, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks.

Revisiting DeepFool: generalization and improvement

1 code implementation22 Mar 2023 Alireza Abdollahpourrostam, Mahed Abroshan, Seyed-Mohsen Moosavi-Dezfooli

Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal $\ell_2$ adversarial perturbations.
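
For context, the hedged sketch below shows the classical DeepFool idea this paper revisits, in its simplest binary form: repeatedly linearize the classifier and take the smallest $\ell_2$ step that crosses the linearized decision boundary. It is not the generalized attack proposed here; `model` is a hypothetical scorer returning a scalar whose sign encodes the class.

```python
# Hedged sketch of classical (binary) DeepFool, not the paper's new attacks.
# `model(x)` is assumed to return a scalar score; sign(score) is the class.
import torch

def deepfool_binary(model, x, max_iter=50, overshoot=0.02):
    x_adv = x.clone().detach().requires_grad_(True)
    orig_sign = torch.sign(model(x_adv).squeeze()).item()
    for _ in range(max_iter):
        score = model(x_adv).squeeze()
        if torch.sign(score).item() != orig_sign:  # boundary crossed: done
            break
        grad, = torch.autograd.grad(score, x_adv)
        # Smallest l2 step onto the linearized boundary {score = 0}.
        r = -score.detach() * grad / grad.norm() ** 2
        x_adv = (x_adv + (1 + overshoot) * r).detach().requires_grad_(True)
    return x_adv.detach()
```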

Adversarial Attack · Adversarial Robustness +1

How to choose your best allies for a transferable attack?

1 code implementation ICCV 2023 Thibault Maho, Seyed-Mohsen Moosavi-Dezfooli, Teddy Furon

The transferability of adversarial examples is a key issue in the security of deep neural networks.

A geometry-inspired decision-based attack

1 code implementation ICCV 2019 Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

The qFool method can drastically reduce the number of queries compared to previous decision-based attacks while reaching the same quality of adversarial examples.

General Classification · Image Classification

Divide, Denoise, and Defend against Adversarial Attacks

no code implementations 19 Feb 2018 Seyed-Mohsen Moosavi-Dezfooli, Ashish Shrivastava, Oncel Tuzel

Improving the robustness of neural networks against these attacks is important, especially for security-critical applications.

Denoising

Adaptive Quantization for Deep Neural Network

no code implementations 4 Dec 2017 Yiren Zhou, Seyed-Mohsen Moosavi-Dezfooli, Ngai-Man Cheung, Pascal Frossard

First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy.
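
As a rough illustration of what such a measurement captures (a brute-force variant shown for illustration only, not the estimator proposed in the paper), one can quantize one layer's weights at a time and record the resulting accuracy drop; `model`, `loader`, and `accuracy` below are hypothetical.

```python
# Hedged sketch of the underlying idea (brute-force variant, not the paper's
# measurement): quantize one layer at a time and record the accuracy drop.
import torch

def uniform_quantize(w, num_bits):
    # Uniform symmetric quantization of a weight tensor to `num_bits` bits.
    scale = w.abs().max() / (2 ** (num_bits - 1) - 1)
    return torch.round(w / scale) * scale

@torch.no_grad()
def layerwise_sensitivity(model, loader, accuracy, num_bits=4):
    baseline = accuracy(model, loader)
    drops = {}
    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            saved = module.weight.data.clone()
            module.weight.data = uniform_quantize(saved, num_bits)
            drops[name] = baseline - accuracy(model, loader)
            module.weight.data = saved  # restore full-precision weights
    return drops  # larger drop => layer is more sensitive to quantization
```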

Quantization

Robustness of classifiers to universal perturbations: a geometric perspective

no code implementations ICLR 2018 Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto

Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers.

Classification regions of deep neural networks

no code implementations 26 May 2017 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto

The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space.

Classification · General Classification

Robustness of classifiers: from adversarial to random noise

no code implementations NeurIPS 2016 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes.

Empirical Study of the Topology and Geometry of Deep Networks

no code implementations CVPR 2018 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto

We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary.

General Classification

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness

no code implementations 19 Oct 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.

Adversarial Robustness

A neural anisotropic view of underspecification in deep learning

no code implementations 29 Apr 2021 Guillermo Ortiz-Jimenez, Itamar Franco Salazar-Reque, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we propose to study this problem from a geometric perspective with the aim to understand two key characteristics of neural network solutions in underspecified settings: how is the geometry of the learned function related to the data representation?

Fairness · Inductive Bias

Understanding Catastrophic Overfitting in Adversarial Training

no code implementations 6 May 2021 Peilin Kang, Seyed-Mohsen Moosavi-Dezfooli

In this paper, we find that catastrophic overfitting (CO) is not limited to FGSM, but also happens in $\mbox{DF}^{\infty}$-1 adversarial training.

Uniform Convergence, Adversarial Spheres and a Simple Remedy

no code implementations 7 May 2021 Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann

By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (adversarial set), rendering any existing generalization bound based on uniform convergence vacuous.

Adversarial training may be a double-edged sword

no code implementations 24 Jul 2021 Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai

Adversarial training has been shown as an effective approach to improve the robustness of image classifiers against white-box attacks.

An evaluation of quality and robustness of smoothed explanations

no code implementations 29 Sep 2021 Ahmad Ajalloeian, Seyed-Mohsen Moosavi-Dezfooli, Michalis Vlachos, Pascal Frossard

However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.

On the exploitative behavior of adversarial training against adversarial attacks

no code implementations 29 Sep 2021 Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai

Adversarial attacks have been developed as intentionally designed perturbations added to the inputs in order to fool deep neural network classifiers.

Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off

no code implementations ICLR 2022 Rahul Rade, Seyed-Mohsen Moosavi-Dezfooli

While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy.

The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training

no code implementations CVPR 2023 Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, JianHuang Lai, Xiaohua Xie

To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
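
A schematic sketch of that idea follows: pair each adversarial example (a perturbation that raises the loss) with an "inverse adversarial" counterpart (one that lowers it) and penalize disagreement between the two outputs. The single-step perturbations and the KL consistency term are assumptions for illustration, not the paper's exact scheme; all names are hypothetical.

```python
# Schematic sketch only: single-step adversarial / inverse-adversarial pair
# plus a KL consistency penalty. Not the paper's exact training scheme.
import torch
import torch.nn.functional as F

def inverse_adversarial_consistency_loss(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # raises the loss
    x_inv = (x - eps * grad.sign()).clamp(0, 1).detach()  # lowers the loss
    p_adv = F.log_softmax(model(x_adv), dim=1)
    p_inv = F.softmax(model(x_inv), dim=1)
    # Encourage similar outputs on the adversarial / inverse-adversarial pair.
    return F.kl_div(p_adv, p_inv, reduction="batchmean")
```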
