Search Results for author: Pau de Jorge

Found 8 papers, 7 papers with code

Placing Objects in Context via Inpainting for Out-of-distribution Segmentation

1 code implementation • 26 Feb 2024 • Pau de Jorge, Riccardo Volpi, Puneet K. Dokania, Philip H. S. Torr, Grégory Rogez

In our experiments, we present different anomaly segmentation datasets based on POC-generated data and show that POC can improve the performance of recent state-of-the-art anomaly fine-tuning methods in several standardized benchmarks.

Segmentation • Semantic Segmentation

Reliability in Semantic Segmentation: Are We on the Right Track?

1 code implementation • CVPR 2023 • Pau de Jorge, Riccardo Volpi, Philip Torr, Grégory Rogez

We analyze a broad variety of models, ranging from older ResNet-based architectures to recent transformers, and assess their reliability on four metrics: robustness, calibration, misclassification detection, and out-of-distribution (OOD) detection.
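Of the four metrics above, calibration is the one most often summarized by a single number, the expected calibration error (ECE). The sketch below is a minimal binned-ECE computation in NumPy; it is an illustration of the standard metric, not the paper's own evaluation code, and the toy inputs are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: |bin accuracy - bin confidence|, weighted by bin mass."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# Toy check: a predictor that is 90% accurate at 0.9 confidence is
# perfectly calibrated, so its ECE is 0.
conf = np.full(10, 0.9)
hits = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
print(expected_calibration_error(conf, hits))
```

A well-calibrated but inaccurate model and an accurate but overconfident model can have very different ECE, which is why the paper reports calibration separately from robustness.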

Out-of-Distribution (OOD) Detection • Semantic Segmentation

Catastrophic overfitting can be induced with discriminative non-robust features

1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr

Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of adversarial training (AT).

Robust classification

On the Road to Online Adaptation for Semantic Image Segmentation

1 code implementation • CVPR 2022 • Riccardo Volpi, Pau de Jorge, Diane Larlus, Gabriela Csurka

We propose a new problem formulation and a corresponding evaluation framework to advance research on unsupervised domain adaptation for semantic image segmentation.

Image Segmentation • Segmentation • +2

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania

Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
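The single-step FGSM attack referenced above perturbs each input coordinate by a fixed budget in the direction of the loss gradient's sign. The following is a minimal sketch on a hypothetical logistic model (the weights, inputs, and epsilon are illustrative; it is not the paper's training code, which studies how such single-step training triggers CO).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, epsilon):
    """One-step FGSM: x' = x + epsilon * sign(d loss / d x).

    For binary cross-entropy on a logistic model p = sigmoid(w.x),
    the input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical toy instance: a 3-d input attacked with budget 0.1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm(x, y=1.0, w=w, epsilon=0.1)
print(x_adv)  # each coordinate shifted by +/- epsilon
```

Because FGSM takes only one gradient step, it is far cheaper than multi-step PGD training, which is what makes the CO failure mode (and the noise-based fix studied in this paper) practically important.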

Towards fast and effective single-step adversarial training

no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania

In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.

Progressive Skeletonization: Trimming more fat from a network at initialization

1 code implementation • ICLR 2021 • Pau de Jorge, Amartya Sanyal, Harkirat S. Behl, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania

Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity at both inference and training time, while only marginally degrading their performance.
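"Progressive" skeletonization prunes a network at initialization over several score-and-prune rounds rather than in one shot. The sketch below illustrates that idea with a simple magnitude score and an exponential sparsity schedule; both choices are assumptions for illustration, not the paper's exact FORCE saliency or schedule.

```python
import numpy as np

def progressive_prune(weights, target_density, n_rounds=3):
    """Prune toward target_density over n_rounds score-and-prune rounds.

    Each round keeps the top-k surviving weights by magnitude, with k
    shrinking along an exponential schedule (an illustrative choice).
    """
    mask = np.ones(weights.shape, dtype=bool)
    for r in range(1, n_rounds + 1):
        density = target_density ** (r / n_rounds)
        k = max(1, int(density * weights.size))
        scores = (np.abs(weights) * mask).ravel()  # pruned weights score 0
        keep = np.argsort(scores)[-k:]
        flat = np.zeros(weights.size, dtype=bool)
        flat[keep] = True
        mask = flat.reshape(weights.shape)
    return mask

# Hypothetical example: prune a random 8x8 layer to 10% density.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
mask = progressive_prune(w, target_density=0.1)
print(mask.sum())  # 6 of 64 weights survive
```

Gradually tightening the mask lets later rounds re-score the network under the sparsity imposed so far, which is the intuition behind progressive (as opposed to one-shot) pruning at initialization.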
