1 code implementation • 27 Feb 2024 • Tyler L. Hayes, César R. de Souza, Namil Kim, Jiwon Kim, Riccardo Volpi, Diane Larlus
In this work, we look at ways to extend a detector trained on a set of base classes so that it can i) spot the presence of novel classes, and ii) automatically enrich its repertoire to detect these newly discovered classes alongside the base ones.
1 code implementation • 26 Feb 2024 • Pau de Jorge, Riccardo Volpi, Puneet K. Dokania, Philip H. S. Torr, Gregory Rogez
In our experiments, we present different anomaly segmentation datasets based on POC-generated data and show that POC can improve the performance of recent state-of-the-art anomaly fine-tuning methods on several standardized benchmarks.
no code implementations • 31 May 2023 • Subhankar Roy, Riccardo Volpi, Gabriela Csurka, Diane Larlus
Class-incremental semantic image segmentation assumes multiple model updates, each enriching the model to segment new categories.
1 code implementation • CVPR 2023 • Pau de Jorge, Riccardo Volpi, Philip Torr, Gregory Rogez
We analyze a broad variety of models, ranging from older ResNet-based architectures to recent transformers, and assess their reliability based on four metrics: robustness, calibration, misclassification detection and out-of-distribution (OOD) detection.
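As a quick illustration of the calibration metric mentioned here, a minimal NumPy sketch of expected calibration error (ECE); the equal-width binning scheme and bin count are generic assumptions, not necessarily the paper's exact protocol:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Weighted average gap between per-bin confidence and accuracy."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue  # empty bin contributes nothing
        accuracy = (predictions[in_bin] == labels[in_bin]).mean()
        confidence = confidences[in_bin].mean()
        ece += in_bin.mean() * abs(accuracy - confidence)
    return ece
```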
no code implementations • 13 Feb 2023 • Gabriela Csurka, Riccardo Volpi, Boris Chidlovskii
Semantic image segmentation (SiS) plays a fundamental role in a broad variety of computer vision applications, providing key information for the global understanding of an image.
1 code implementation • CVPR 2022 • Riccardo Volpi, Pau de Jorge, Diane Larlus, Gabriela Csurka
We propose a new problem formulation and a corresponding evaluation framework to advance research on unsupervised domain adaptation for semantic image segmentation.
1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
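For reference, a minimal sketch of the single-step FGSM adversarial training step in which catastrophic overfitting arises (PyTorch; the eps value and the [0, 1] pixel range are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, x, y, optimizer, eps=8 / 255):
    # Craft the single-step FGSM adversarial example.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
    # Update the model on the adversarial example.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```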
no code implementations • 6 Dec 2021 • Gabriela Csurka, Riccardo Volpi, Boris Chidlovskii
Semantic segmentation plays a fundamental role in a broad variety of computer vision applications, providing key information for the global understanding of an image.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
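A sketch of the two ingredients under scrutiny, random noise added before the FGSM step and eps-ball clipping afterwards; the noise magnitude and the clipping choice here are illustrative, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def noisy_fgsm_perturb(model, x, y, eps, noise_scale, clip_to_eps_ball=True):
    # Random noise added before the gradient step.
    eta = torch.empty_like(x).uniform_(-noise_scale, noise_scale)
    x_noisy = (x + eta).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]
    delta = eta + eps * grad.sign()
    if clip_to_eps_ball:
        # Optionally project the perturbation back to the eps-ball.
        delta = delta.clamp(-eps, eps)
    return (x + delta).clamp(0.0, 1.0).detach()
```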
1 code implementation • 24 Feb 2021 • Robert-George Colt, Csongor-Huba Várady, Riccardo Volpi, Luigi Malagò
We focus on automatic feature extraction for raw audio heartbeat sounds, aimed at anomaly detection applications in healthcare.
no code implementations • CVPR 2021 • Riccardo Volpi, Diane Larlus, Grégory Rogez
In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization: for vision tasks, randomizing the current domain's distribution with heavy image manipulations.
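A minimal sketch of what such heavy image manipulations could look like with standard torchvision transforms; the specific transforms and strengths are illustrative assumptions, not the paper's recipe (assumes PIL-image inputs and torchvision >= 0.10):

```python
from torchvision import transforms

heavy_randomization = transforms.Compose([
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.3),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 5.0)),
    transforms.RandomSolarize(threshold=128, p=0.2),
    transforms.RandomPosterize(bits=3, p=0.2),
])
```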
no code implementations • 29 Nov 2020 • Hector J. Hortua, Riccardo Volpi, Dimitri Marinelli, Luigi Malago
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
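As a generic illustration of the MCMC family referred to here (not the paper's specific setup), a minimal random-walk Metropolis sampler:

```python
import numpy as np

def random_walk_metropolis(log_prob, x0, n_steps, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        lp_new = log_prob(proposal)
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = proposal, lp_new
        samples.append(x)
    return np.array(samples)
```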
1 code implementation • NeurIPS Workshop DL-IG 2020 • Csongor Várady, Riccardo Volpi, Luigi Malagò, Nihat Ay
These models are commonly trained using a two-step optimization algorithm called Wake-Sleep (WS) and more recently by improved versions, such as Reweighted Wake-Sleep (RWS) and Bidirectional Helmholtz Machines (BiHM).
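A compact NumPy sketch of Wake-Sleep on a single-layer Helmholtz machine; layer sizes and learning rate are illustrative, and the paper studies deeper models and the improved RWS/BiHM variants:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
n_v, n_h, lr = 784, 64, 0.01

W_g, b_g, b_h = np.zeros((n_v, n_h)), np.zeros(n_v), np.zeros(n_h)  # generative
W_r, b_r = np.zeros((n_h, n_v)), np.zeros(n_h)                      # recognition

def wake_sleep_step(v):
    global W_g, b_g, b_h, W_r, b_r
    # Wake phase: infer h from data, fit the generative weights (delta rule).
    h = (rng.uniform(size=n_h) < sigmoid(W_r @ v + b_r)).astype(float)
    p_v = sigmoid(W_g @ h + b_g)
    W_g += lr * np.outer(v - p_v, h)
    b_g += lr * (v - p_v)
    b_h += lr * (h - sigmoid(b_h))
    # Sleep phase: "dream" (h, v) from the generative model, fit recognition.
    h_d = (rng.uniform(size=n_h) < sigmoid(b_h)).astype(float)
    v_d = (rng.uniform(size=n_v) < sigmoid(W_g @ h_d + b_g)).astype(float)
    q_h = sigmoid(W_r @ v_d + b_r)
    W_r += lr * np.outer(h_d - q_h, v_d)
    b_r += lr * (h_d - q_h)
```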
no code implementations • 15 Aug 2020 • Hector J. Hortua, Luigi Malago, Riccardo Volpi
Bayesian Neural Networks (BNNs) are often miscalibrated after training, usually tending towards overconfidence.
no code implementations • WS 2020 • Riccardo Volpi, Luigi Malagò
Skip-Gram is a simple but effective model for learning word embeddings: it estimates, for each word of the dictionary, a conditional probability distribution over its context words.
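Concretely, the conditional distribution Skip-Gram estimates is a softmax over context words (a standard formulation; the notation below is ours):

```latex
% Skip-Gram's conditional distribution over a context word c given a center
% word w, with input/output embeddings v_w and u_c and vocabulary V:
p(c \mid w) = \frac{\exp(u_c^\top v_w)}{\sum_{c' \in V} \exp(u_{c'}^\top v_w)}
```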
no code implementations • 14 May 2020 • Héctor J. Hortúa, Luigi Malago, Riccardo Volpi
Additionally, we demonstrate the advantages of combining Normalizing Flows (NF) with BNNs: flows can model more complex output distributions, and thus capture key information such as non-Gaussianities in the conditional density of the parameters, for astrophysical and cosmological datasets.
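The mechanism behind this is the change-of-variables formula of normalizing flows, sketched below in our own notation: an invertible, data-conditioned map f_phi transforms a simple base density into the conditional density of the parameters:

```latex
% Change of variables behind a normalizing-flow output head: f_phi(.; x) is
% invertible, p_Z is a simple base density (e.g. a standard Gaussian):
\log p(\theta \mid x) = \log p_Z\big(f_\phi^{-1}(\theta; x)\big)
  + \log \left| \det \frac{\partial f_\phi^{-1}(\theta; x)}{\partial \theta} \right|
```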
no code implementations • 4 May 2020 • Héctor J. Hortúa, Riccardo Volpi, Luigi Malagò
Upcoming experiments such as the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array (SKA) are intended to measure the 21cm signal over a wide range of redshifts, representing a remarkable opportunity to advance our understanding of the nature of cosmic reionization.
1 code implementation • 13 Mar 2020 • Ruggero Ragonesi, Riccardo Volpi, Jacopo Cavazza, Vittorio Murino
We are interested in learning data-driven representations that can generalize well, even when trained on inherently biased data.
no code implementations • 13 Mar 2020 • Andrea Zunino, Sarah Adel Bargal, Riccardo Volpi, Mehrnoosh Sameki, Jianming Zhang, Stan Sclaroff, Vittorio Murino, Kate Saenko
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
2 code implementations • 9 Jan 2020 • Pietro Morerio, Riccardo Volpi, Ruggero Ragonesi, Vittorio Murino
We exploit this finding in an iterative procedure where a generative model and a classifier are jointly trained: in turn, the generator makes it possible to sample cleaner data from the target distribution, and the classifier to assign better labels to target samples, progressively refining the target pseudo-labels.
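A high-level sketch of this alternating scheme; all function names here are placeholders, not the paper's API:

```python
def refine_pseudo_labels(target_data, generator, classifier, n_rounds=5):
    pseudo_labels = classifier.predict(target_data)  # initial, noisy labels
    for _ in range(n_rounds):
        # 1) Fit the conditional generative model on the pseudo-labeled target set.
        generator.fit(target_data, pseudo_labels)
        # 2) Sample "cleaner" labeled data from the generator.
        x_gen, y_gen = generator.sample(n=len(target_data))
        # 3) Retrain the classifier on the generated data ...
        classifier.fit(x_gen, y_gen)
        # 4) ... and use it to re-label the real target samples.
        pseudo_labels = classifier.predict(target_data)
    return pseudo_labels
```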
no code implementations • 4 Dec 2019 • Riccardo Volpi, Luigi Malagò
Learning an embedding for a large collection of items is a popular approach to overcome the computational limitations associated with one-hot encodings.
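A small sketch of why an embedding lookup sidesteps the one-hot cost: indexing a row of the table replaces multiplying a |V|-dimensional one-hot vector by a |V| x d matrix (sizes are illustrative):

```python
import numpy as np

vocab_size, dim = 100_000, 128
E = np.random.randn(vocab_size, dim).astype(np.float32)  # embedding table

item = 42
one_hot = np.zeros(vocab_size, dtype=np.float32)
one_hot[item] = 1.0

dense_result = one_hot @ E   # O(|V| * d) multiply-adds, mostly with zeros
lookup_result = E[item]      # O(d): just reading one row
assert np.allclose(dense_result, lookup_result)
```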
2 code implementations • 19 Nov 2019 • Hector J. Hortua, Riccardo Volpi, Dimitri Marinelli, Luigi Malagò
In the second part of the paper, we present a guide to the training and calibration of a successful multi-channel BNN for the CMB temperature and polarization maps.
2 code implementations • ICCV 2019 • Riccardo Volpi, Vittorio Murino
We are concerned with the vulnerability of computer vision models to distributional shifts.
no code implementations • 5 Jul 2018 • Septimia Sârbu, Riccardo Volpi, Alexandra Peşte, Luigi Malagò
In this paper we propose two novel bounds for the log-likelihood based on the Kullback-Leibler and Rényi divergences, which can be used for variational inference and in particular for the training of Variational AutoEncoders.
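For context, the general Rényi variational bound family such bounds build on (Li & Turner, 2016); the paper's own two bounds are new variants, detailed in the paper itself:

```latex
% Renyi variational bound on the log-likelihood (Li & Turner, 2016); it
% recovers the standard ELBO as alpha -> 1 and lower-bounds log p(x) for
% alpha > 0:
\mathcal{L}_{\alpha}(q; x) = \frac{1}{1 - \alpha}
  \log \mathbb{E}_{q(z \mid x)}\!\left[
    \left( \frac{p(x, z)}{q(z \mid x)} \right)^{1 - \alpha}
  \right] \le \log p(x)
```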
2 code implementations • NeurIPS 2018 • Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, Silvio Savarese
Using only training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model.
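A minimal PyTorch sketch of this augmentation loop; for simplicity the distance penalty is taken in pixel space here, whereas the paper constrains distance in a semantic space, and all hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def make_fictitious_examples(model, x, y, gamma=1.0, step=0.1, n_steps=15):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        # Ascend the loss while penalizing distance from the source images.
        objective = F.cross_entropy(model(x_adv), y) \
                    - gamma * ((x_adv - x) ** 2).mean()
        grad = torch.autograd.grad(objective, x_adv)[0]
        x_adv = (x_adv + step * grad).detach().requires_grad_(True)
    return x_adv.detach()  # append these to the training pool
```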
2 code implementations • CVPR 2018 • Riccardo Volpi, Pietro Morerio, Silvio Savarese, Vittorio Murino
Recent works have shown that Generative Adversarial Networks (GANs) can be successfully applied to unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples.
1 code implementation • ICLR 2018 • Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John Duchi
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.
2 code implementations • ICCV 2017 • Pietro Morerio, Jacopo Cavazza, Riccardo Volpi, Rene Vidal, Vittorio Murino
This induces an adaptive regularization scheme that smoothly increases the difficulty of the optimization problem.
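A sketch of such a schedule in the spirit of curriculum dropout: the retain probability starts at 1 (no dropout) and decays smoothly towards its target, so the regularization strength grows during training; gamma and theta_bar are illustrative values:

```python
import math

def retain_probability(t, theta_bar=0.5, gamma=1e-4):
    """Probability of keeping a unit at training step t (1 - dropout rate)."""
    return (1.0 - theta_bar) * math.exp(-gamma * t) + theta_bar

# The dropout rate 1 - retain_probability(t) starts at 0 and smoothly
# approaches 1 - theta_bar, making the optimization gradually harder.
```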
no code implementations • 11 Jan 2017 • Matteo Zanotto, Riccardo Volpi, Alessandro Maccione, Luca Berdondini, Diego Sona, Vittorio Murino
The idea was to determine whether binary latent states encode the regularities associated with different visual stimuli as modes in the joint distribution.