Search Results for author: Apostolos Modas

Found 12 papers, 6 papers with code

SparseFool: a few pixels make a big difference

1 code implementation • CVPR 2019 • Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data.

Image Classification
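
To make the idea concrete, here is a minimal, hypothetical sketch of a sparse (one-coordinate) perturbation that flips a linear binary classifier. It only illustrates why a few pixels can suffice; it is not the SparseFool algorithm, which iteratively linearizes the decision boundary of a deep network.

    import numpy as np

    # Sketch (assumption, not the paper's code): for a linear classifier
    # f(x) = w.x + b, changing only the most influential coordinate is
    # enough to cross the decision boundary.
    def sparse_flip(x, w, b):
        margin = w @ x + b                # signed score; zero on the boundary
        i = np.argmax(np.abs(w))          # coordinate with the largest influence
        delta = np.zeros_like(x)
        delta[i] = -1.02 * margin / w[i]  # cross the boundary with 2% overshoot
        return x + delta

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=10), 0.1
    x = rng.normal(size=10)
    print(np.sign(w @ x + b), np.sign(w @ sparse_flip(x, w, b) + b))  # sign flips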

Multi-view shape estimation of transparent containers

1 code implementation • 27 Nov 2019 • Alessio Xompero, Ricardo Sanchez-Matilla, Apostolos Modas, Pascal Frossard, Andrea Cavallaro

The 3D localisation of an object and the estimation of its properties, such as shape and dimensions, are challenging under varying degrees of transparency and lighting conditions.

Semantic Segmentation

Hold me tight! Influence of discriminative features on deep network boundaries

1 code implementation • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.

Adversarial Robustness
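
A hypothetical sketch of the kind of measurement this perspective relies on: estimate a sample's distance to the decision boundary by bisecting along a chosen direction until the prediction changes. The `predict` and `direction` names are assumptions for illustration; the paper obtains such distances with adversarial attacks.

    import numpy as np

    # Sketch (assumed names, not the paper's code): distance to the decision
    # boundary along `direction`, found by binary search on the step size.
    def boundary_distance(x, predict, direction, t_max=10.0, iters=40):
        y0 = predict(x)
        if predict(x + t_max * direction) == y0:
            return np.inf                      # no label flip within reach
        lo, hi = 0.0, t_max
        for _ in range(iters):                 # bisect the crossing point
            mid = 0.5 * (lo + hi)
            if predict(x + mid * direction) == y0:
                lo = mid
            else:
                hi = mid
        return hi

    # Toy usage on a linear classifier, where the answer is |w.x| / ||w||.
    w = np.array([1.0, -2.0])
    predict = lambda z: int(z @ w > 0)
    x = np.array([1.0, 0.0])
    print(boundary_distance(x, predict, -w / np.linalg.norm(w)))  # ~0.447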

Neural Anisotropy Directions

2 code implementations • NeurIPS 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.

Inductive Bias

Towards robust sensing for Autonomous Vehicles: An adversarial perspective

no code implementations • 14 Jul 2020 • Apostolos Modas, Ricardo Sanchez-Matilla, Pascal Frossard, Andrea Cavallaro

Autonomous Vehicles rely on accurate and robust sensor observations for safety-critical decision-making in a variety of conditions.

Autonomous Vehicles • Decision Making

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness

no code implementations • 19 Oct 2020 • Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.

Adversarial Robustness

A neural anisotropic view of underspecification in deep learning

no code implementations • 29 Apr 2021 • Guillermo Ortiz-Jimenez, Itamar Franco Salazar-Reque, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we propose to study this problem from a geometric perspective with the aim to understand two key characteristics of neural network solutions in underspecified settings: how is the geometry of the learned function related to the data representation?

Fairness • Inductive Bias

PRIME: A few primitives can boost robustness to common corruptions

1 code implementation • 27 Dec 2021 • Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data.

Computational Efficiency • Data Augmentation • +2
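
A simplified, hypothetical sketch in the spirit of PRIME: randomly compose a few primitive transformations and mix the result with the clean image. The actual method samples max-entropy perturbations in the spectral, spatial, and color domains; the primitives below are stand-ins for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in primitives (assumptions, not the paper's transformations).
    def jitter_color(img):   # random per-channel gain
        return np.clip(img * rng.uniform(0.7, 1.3, size=(1, 1, 3)), 0.0, 1.0)

    def add_noise(img):      # small additive Gaussian noise
        return np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

    def roll_pixels(img):    # small random translation with wrap-around
        return np.roll(img, rng.integers(-3, 4, size=2), axis=(0, 1))

    PRIMITIVES = (jitter_color, add_noise, roll_pixels)

    def prime_like_augment(img):
        out = img
        for i in rng.permutation(len(PRIMITIVES)):  # random primitive order
            if rng.random() < 0.5:                  # each applied with prob. 1/2
                out = PRIMITIVES[i](out)
        lam = rng.uniform(0.4, 1.0)                 # convex mix with clean image
        return lam * out + (1.0 - lam) * img

    augmented = prime_like_augment(rng.random((32, 32, 3)))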

Data augmentation with mixtures of max-entropy transformations for filling-level classification

no code implementations • 8 Mar 2022 • Apostolos Modas, Andrea Cavallaro, Pascal Frossard

We address the problem of distribution shifts in test-time data with a principled data augmentation scheme for the task of content-level classification.

Data Augmentation • Transfer Learning

Robustness and invariance properties of image classifiers

no code implementations • 30 Aug 2022 • Apostolos Modas

We exploit the geometry of the decision boundaries of image classifiers for computing sparse perturbations very fast, and reveal a qualitative connection between adversarial examples and the data features that image classifiers learn.

Data Augmentation • Image Classification • +1

Ethical Considerations for Responsible Data Curation

1 code implementation • NeurIPS 2023 • Jerone T. A. Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Alice Xiang

Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models.

Fairness