Learning and Evaluating Representations for Deep One-class Classification

We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations. The framework not only allows learning better representations, but also permits building one-class classifiers that are faithful to the target task. We argue that classifiers inspired by the statistical perspective of generative or discriminative models are more effective than existing approaches, such as a normality score from a surrogate classifier. We thoroughly evaluate different self-supervised representation learning algorithms under the proposed framework for one-class classification. Moreover, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations. In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks, including novelty and anomaly detection. Finally, we present visual explanations, confirming that the decision-making process of deep one-class classifiers is intuitive to humans. The code is available at https://github.com/google-research/deep_representation_one_class.
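The second stage of the framework fits a simple one-class classifier on frozen representations from the first stage. Below is a minimal sketch of that idea, assuming a hypothetical pretrained encoder has already produced feature arrays; it uses a kernel density estimator as the one-class scorer, one plausible choice among the simple detectors of this kind, and is not the authors' official implementation (see the linked repository for that).

```python
# Sketch of stage 2: build a one-class scorer on top of learned representations.
# Assumes `train_feats` are features of in-distribution (one-class) training data
# produced by a stage-1 self-supervised encoder (hypothetical, not shown here).
import numpy as np
from sklearn.neighbors import KernelDensity


def build_one_class_scorer(train_feats: np.ndarray, bandwidth: float = 1.0):
    """Fit a kernel density estimate on l2-normalized training features and
    return a function that maps new features to anomaly scores."""
    norm_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(norm_feats)

    def anomaly_score(feats: np.ndarray) -> np.ndarray:
        feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        # Lower log-density under the one-class model -> higher anomaly score.
        return -kde.score_samples(feats)

    return anomaly_score


# Usage (with placeholder features standing in for encoder outputs):
# scorer = build_one_class_scorer(np.random.randn(1000, 128))
# scores = scorer(np.random.randn(10, 128))
```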

Task | Dataset | Model | Metric | Value | Global Rank
Anomaly Detection | MVTec AD | RotNet (MLP Head) | Detection AUROC | 86.3 | #83
Anomaly Detection | MVTec AD | RotNet (MLP Head) | Segmentation AUROC | 93 | #74
Anomaly Detection | MVTec AD | DisAug CLR | Detection AUROC | 86.5 | #82
Anomaly Detection | MVTec AD | DisAug CLR | Segmentation AUROC | 90.4 | #80
Anomaly Detection | One-class CIFAR-10 | DisAug CLR | AUROC | 92.5 | #12
Anomaly Detection | One-class CIFAR-100 | DisAug CLR | AUROC | 86.5 | #7
Anomaly Detection | One-class CIFAR-100 | Rotation Prediction | AUROC | 84.1 | #9
