1 code implementation • CVPR 2019 • Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data.
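The idea of a "carefully crafted" perturbation can be made concrete in the linear case. The sketch below is a toy illustration (not the paper's algorithm): for a linear classifier f(x) = w·x + b, the smallest L2 perturbation that flips the decision is the projection of x onto the hyperplane f(x) = 0.

```python
import numpy as np

def minimal_l2_perturbation(x, w, b):
    """Minimal-norm perturbation moving x onto the boundary w @ x + b = 0."""
    f = w @ x + b
    return -(f / (w @ w)) * w

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical linear classifier weights
b = 0.3
x = rng.normal(size=5)   # a sample the classifier labels by sign(w @ x + b)

r = minimal_l2_perturbation(x, w, b)
# A tiny overshoot past the boundary flips the predicted class.
x_adv = x + 1.02 * r
assert np.sign(w @ x + b) != np.sign(w @ x_adv + b)
```

Deep networks are not linear, but attacks in this family iterate a local linearization of the classifier around the current point.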
1 code implementation • 27 Dec 2021 • Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Despite their impressive performance on image classification tasks, deep networks struggle to generalize to unforeseen corruptions of their data.
Ranked #28 on Domain Generalization on ImageNet-C
1 code implementation • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
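For intuition on the distance-to-boundary view, consider a linear stand-in for a deep classifier (a hedged sketch, not the paper's method): the distance of a sample to the decision boundary of f(x) = w·x + b is |f(x)| / ‖w‖, so samples with a stronger discriminative feature lie farther from the boundary.

```python
import numpy as np

def boundary_distance(X, w, b):
    """Per-sample distance to the hyperplane w @ x + b = 0."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

rng = np.random.default_rng(1)
# Hypothetical setup: feature 0 is discriminative, features 1-2 are noise.
w = np.array([2.0, 0.0, 0.0])
b = 0.0
X = rng.normal(size=(100, 3))

d = boundary_distance(X, w, b)
# The magnitude of the discriminative feature tracks the margin exactly here:
assert np.corrcoef(np.abs(X[:, 0]), d)[0, 1] > 0.99
```

For deep networks the boundary is curved, so such distances are typically estimated with adversarial perturbations, which is the bridge to robustness tools the abstract alludes to.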
2 code implementations • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.
1 code implementation • 27 Nov 2019 • Alessio Xompero, Ricardo Sanchez-Matilla, Apostolos Modas, Pascal Frossard, Andrea Cavallaro
The 3D localisation of an object and the estimation of its properties, such as shape and dimensions, are challenging under varying degrees of transparency and lighting conditions.
1 code implementation • NeurIPS 2023 • Jerone T. A. Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Alice Xiang
Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models.
no code implementations • 14 Jul 2020 • Apostolos Modas, Ricardo Sanchez-Matilla, Pascal Frossard, Andrea Cavallaro
Autonomous vehicles rely on accurate and robust sensor observations for safety-critical decision-making in a variety of conditions.
no code implementations • 19 Oct 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.
no code implementations • 8 Feb 2021 • Apostolos Modas, Alessio Xompero, Ricardo Sanchez-Matilla, Pascal Frossard, Andrea Cavallaro
We investigate the problem of classifying, from a single image, the level of content in a cup or a drinking glass.
no code implementations • 29 Apr 2021 • Guillermo Ortiz-Jimenez, Itamar Franco Salazar-Reque, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
In this work, we propose to study this problem from a geometric perspective, with the aim of understanding two key characteristics of neural network solutions in underspecified settings: how the geometry of the learned function relates to the data representation.
no code implementations • 8 Mar 2022 • Apostolos Modas, Andrea Cavallaro, Pascal Frossard
We address the problem of distribution shifts in test-time data with a principled data augmentation scheme for the task of content-level classification.
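As a minimal, hypothetical sketch of what such an augmentation scheme can look like (not the paper's actual scheme), one can randomize acquisition-like factors such as brightness and sensor noise at training time, so the model sees a wider range of conditions than the clean training set:

```python
import numpy as np

def augment(img, rng):
    """Apply random brightness jitter and additive noise to an image in [0, 1]."""
    img = img * rng.uniform(0.7, 1.3)             # brightness jitter
    img = img + rng.normal(0.0, 0.05, img.shape)  # sensor-like noise
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(size=(8, 8))  # toy grayscale image
aug = augment(img, rng)
assert aug.shape == img.shape and aug.min() >= 0.0 and aug.max() <= 1.0
```

The design choice here is that augmentations act on nuisance factors (illumination, noise) while preserving the label-relevant content, which is what makes them usable for content-level classification.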
no code implementations • 30 Aug 2022 • Apostolos Modas
We exploit the geometry of the decision boundaries of image classifiers for computing sparse perturbations very fast, and reveal a qualitative connection between adversarial examples and the data features that image classifiers learn.
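A toy sketch of the sparsity idea in the linear case (hypothetical, and much simpler than the thesis' method): the sparsest perturbation that flips a linear classifier f(x) = w·x + b changes only the single coordinate with the largest |w_i|, i.e. the most influential "pixel".

```python
import numpy as np

def sparse_flip(x, w, b, overshoot=1.02):
    """Flip the sign of w @ x + b by perturbing a single coordinate."""
    i = np.argmax(np.abs(w))        # most influential coordinate
    f = w @ x + b
    r = np.zeros_like(x)
    r[i] = -overshoot * f / w[i]    # step just past the boundary
    return x + r

rng = np.random.default_rng(2)
w, b = rng.normal(size=4), 0.1     # hypothetical linear classifier
x = rng.normal(size=4)

x_adv = sparse_flip(x, w, b)
assert np.sign(w @ x + b) != np.sign(w @ x_adv + b)
assert np.count_nonzero(x_adv - x) == 1  # only one coordinate changed
```

For curved decision boundaries, the same idea is applied to a local linear approximation of the classifier, which is what makes such sparse attacks fast.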