Search Results for author: Francesco Pinto

Found 9 papers, 5 papers with code

PILLAR: How to make semi-private learning more effective

1 code implementation · 6 Jun 2023 · Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal

In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public unlabelled and private labelled data.

Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators

no code implementations · 21 Dec 2022 · Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr

Neural image classifiers are known to undergo severe performance degradation when exposed to inputs that exhibit covariate shifts with respect to the training distribution.

Domain Generalization · Image Augmentation +1

An Impartial Take to the CNN vs Transformer Robustness Contest

no code implementations · 22 Jul 2022 · Francesco Pinto, Philip H. S. Torr, Puneet K. Dokania

Following the surge of popularity of Transformers in Computer Vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs).

Sample-dependent Adaptive Temperature Scaling for Improved Calibration

1 code implementation · 13 Jul 2022 · Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania

The most common post-hoc approach to compensate for this is to perform temperature scaling, which adjusts the confidences of the predictions on any input by scaling the logits by a fixed value.

Out of Distribution (OOD) Detection
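The fixed-temperature baseline described above can be sketched in a few lines: a single scalar T is fitted on held-out validation logits to minimize negative log-likelihood, then reused at test time. This is a minimal NumPy illustration of standard temperature scaling (the paper's contribution is a sample-dependent variant, which is not shown here); the grid range and the toy data are assumptions for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood after dividing all logits by temperature T.
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Grid-search the single scalar T that minimizes validation NLL;
    # T > 1 softens overconfident predictions, T < 1 sharpens them.
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy "validation set": overconfident logits with 30% label noise,
# so the fitted temperature should not make calibration worse than T = 1.
rng = np.random.default_rng(0)
logits = 4.0 * rng.normal(size=(200, 10))
labels = logits.argmax(axis=1)
noisy = rng.random(200) < 0.3
labels[noisy] = rng.integers(0, 10, noisy.sum())
T = fit_temperature(logits, labels)
```

Note that temperature scaling rescales every input's logits by the same value, which is exactly the limitation the sample-dependent approach in this paper targets.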

RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness

2 code implementations · 29 Jun 2022 · Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania

We show that the effectiveness of the well celebrated Mixup [Zhang et al., 2018] can be further improved if instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss.

Out-of-Distribution Detection
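The idea in the abstract above, training on the clean cross-entropy loss plus a Mixup cross-entropy term as a regularizer rather than on Mixup alone, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear softmax `predict` function and the Beta parameter `alpha=10.0` are assumptions for the example (RegMixup favors large `alpha`, so interpolated samples stay close to one endpoint).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y_onehot):
    return -(y_onehot * np.log(probs + 1e-12)).sum(axis=1).mean()

def mixup_batch(x, y_onehot, lam, rng):
    # Interpolate each sample (and its label) with a randomly paired one.
    idx = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def regmixup_loss(predict, x, y_onehot, alpha=10.0, rng=None):
    # Standard CE on the clean batch PLUS CE on a mixup batch as a regularizer;
    # vanilla Mixup would instead train on the mixup batch alone.
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x_mix, y_mix = mixup_batch(x, y_onehot, lam, rng)
    return cross_entropy(predict(x), y_onehot) + cross_entropy(predict(x_mix), y_mix)

# Toy usage with a hypothetical linear softmax classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
predict = lambda x: softmax(x @ W)
x = rng.normal(size=(8, 5))
y = np.eye(3)[rng.integers(0, 3, 8)]
loss = regmixup_loss(predict, x, y)
```

Keeping the clean cross-entropy term means the model still sees undistorted targets every step, which is what the paper credits for the accuracy gains over plain Mixup.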

Mix-MaxEnt: Creating High Entropy Barriers To Improve Accuracy and Uncertainty Estimates of Deterministic Neural Networks

no code implementations · 29 Sep 2021 · Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip Torr, Puneet K. Dokania

We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates.
