Contextualised Out-of-Distribution Detection using Pattern Identification

In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers. CODE does not require any classifier retraining and is OoD-agnostic, i.e., it is tuned directly on the training dataset without requiring access to OoD samples. Crucially, pattern identification allows us to supply images from the In-Distribution (ID) dataset as reference data, giving additional context to the confidence scores. In addition, we introduce a new benchmark based on perturbations of the ID dataset that provides a known and quantifiable measure of the discrepancy between the ID and OoD datasets, serving as a reference value for the comparison of OoD detection methods.
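To illustrate the general idea of scoring an image by how strongly class-specific patterns fire on it, here is a minimal sketch. It is not the authors' implementation: the pattern detectors (`pattern_kernels`), the per-pattern ID calibration values (`calibration_means`), and the threshold are all hypothetical placeholders standing in for whatever the paper actually learns on the training set.

```python
# Minimal sketch (not the CODE implementation) of a pattern-based OoD
# confidence score. Assumes hypothetical per-class pattern detectors,
# each represented as a vector correlated against a frozen backbone's
# feature map, and per-pattern mean activations measured on ID data.

import numpy as np


def pattern_confidence(feature_map, pattern_kernels, calibration_means):
    """Score how strongly the class-specific patterns fire on this image.

    feature_map:       (H, W, C) features from the frozen classifier backbone.
    pattern_kernels:   list of (C,) vectors, one per learned pattern (hypothetical).
    calibration_means: per-pattern mean max-activation measured on the ID training set.
    """
    flat = feature_map.reshape(-1, feature_map.shape[-1])  # (H*W, C)
    scores = []
    for kernel, mu in zip(pattern_kernels, calibration_means):
        # Strongest response of this pattern anywhere in the image.
        activation = float(np.max(flat @ kernel))
        # Normalise against the ID calibration value so patterns are comparable.
        scores.append(activation / (mu + 1e-8))
    # Patterns that fire much weaker than on ID data suggest an OoD input.
    return float(np.mean(scores))


def is_ood(confidence, threshold=0.5):
    """Flag an image as OoD when its normalised pattern confidence is low."""
    return confidence < threshold


if __name__ == "__main__":
    # Example usage with random stand-in data (no real features or patterns).
    rng = np.random.default_rng(0)
    fmap = rng.standard_normal((7, 7, 512))
    kernels = [rng.standard_normal(512) for _ in range(4)]
    mus = [5.0] * 4
    conf = pattern_confidence(fmap, kernels, mus)
    print(f"confidence={conf:.3f}, ood={is_ood(conf)}")
```

Normalising each pattern activation by its ID calibration value is one simple way to make the per-image score interpretable relative to the training distribution, which is also what lets reference ID images be shown alongside the score for context.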
