Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought to be beneficial beyond obviating or reducing the need for annotations.
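The benchmark entries below pair a classifier with a rotation-prediction auxiliary task. As a minimal sketch of how such self-supervised labels can be generated (assuming NumPy image batches in `(N, H, W, C)` layout; the function name is illustrative, not from the paper's code):

```python
import numpy as np

def make_rotation_batch(images):
    """Given a batch of images (N, H, W, C), return the four rotated
    copies stacked as (4N, H, W, C) plus rotation labels (4N,).
    Labels 0..3 correspond to 0, 90, 180, and 270 degrees."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns in the (H, W) plane
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))
        labels.append(np.full(len(images), k))
    return np.concatenate(rotated), np.concatenate(labels)
```

A network head trained to predict these labels provides the auxiliary loss; at test time, the rotation head's confidence can serve as an out-of-distribution or anomaly score.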

Published at NeurIPS 2019.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Out-of-Distribution Detection | CIFAR-10 | WRN 40-2 + Rotation Prediction | FPR95 | 16.0 | # 3 |
| Out-of-Distribution Detection | CIFAR-10 | WRN 40-2 + Rotation Prediction | AUROC | 96.2 | # 4 |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 | WRN 40-2 + Rotation Prediction | AUPR | 67.7 | # 3 |
| Anomaly Detection | One-class CIFAR-10 | RotNet | AUROC | 83.3 | # 5 |
| Anomaly Detection | One-class CIFAR-10 | Supervised (OE) | AUROC | 87.3 | # 3 |
| Anomaly Detection | One-class ImageNet-30 | RotNet + Self-Attention | AUROC | 81.6 | # 4 |
| Anomaly Detection | One-class ImageNet-30 | RotNet + Translation + Self-Attention | AUROC | 84.8 | # 3 |
| Anomaly Detection | One-class ImageNet-30 | Supervised (OE) | AUROC | 56.1 | # 7 |
| Anomaly Detection | One-class ImageNet-30 | RotNet | AUROC | 65.3 | # 6 |
| Anomaly Detection | One-class ImageNet-30 | RotNet + Translation | AUROC | 77.9 | # 5 |
| Anomaly Detection | One-class ImageNet-30 | RotNet + Translation + Self-Attention + Resize | AUROC | 85.7 | # 2 |

Methods used in the Paper