Contextual Affinity Distillation for Image Anomaly Detection

6 Jul 2023 · Jie Zhang, Masanori Suganuma, Takayuki Okatani

Previous work on unsupervised industrial anomaly detection mainly focuses on local structural anomalies such as cracks and color contamination. While achieving high detection performance on such anomalies, these methods struggle with logical anomalies that violate long-range dependencies, such as a normal object placed in the wrong position. In this paper, building on previous knowledge distillation work, we propose using two students (local and global) to better mimic the teacher's behavior. The local student, as used in previous studies, mainly focuses on structural anomaly detection, while the global student attends to logical anomalies. To further encourage the global student to capture long-range dependencies, we design a global context condensing block (GCCB) and propose a contextual affinity loss for student training and anomaly scoring. Experimental results show that the proposed method requires no cumbersome training techniques and achieves new state-of-the-art performance on the MVTec LOCO AD dataset.
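The abstract describes a contextual affinity loss that compares how teacher and student features relate to a set of condensed context vectors (produced by the GCCB). The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch under assumed design choices (cosine affinities, softmax normalization over context vectors, and a KL-divergence comparison between teacher and student affinity distributions). All function names and parameter shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def contextual_affinity(features, context, eps=1e-8):
    """Affinity distribution of each spatial feature over context vectors.

    features: (N, C) array of spatial feature vectors (N locations).
    context:  (K, C) array of condensed context vectors (e.g. from a GCCB).
    Returns an (N, K) array where each row is a softmax over cosine
    similarities, i.e. a probability distribution over the K context vectors.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    c = context / (np.linalg.norm(context, axis=1, keepdims=True) + eps)
    sim = f @ c.T  # (N, K) cosine affinities
    e = np.exp(sim - sim.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def contextual_affinity_loss(t_feat, s_feat, t_ctx, s_ctx, eps=1e-8):
    """KL divergence between teacher and student affinity distributions,
    averaged over spatial locations. Assumed form; usable both as a
    training loss and, per location, as an anomaly score."""
    p = contextual_affinity(t_feat, t_ctx)  # teacher affinities
    q = contextual_affinity(s_feat, s_ctx)  # student affinities
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)  # (N,)
    return float(kl.mean())
```

Because the loss compares relational structure (each location's affinity to global context) rather than raw feature values, a mismatch can flag logical anomalies even when local appearance is normal.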


Results from the Paper


| Task              | Dataset       | Model | Metric Name                      | Metric Value | Global Rank |
|-------------------|---------------|-------|----------------------------------|--------------|-------------|
| Anomaly Detection | MVTec LOCO AD | DSKD  | Avg. Detection AUROC             | 84.0         | #13         |
| Anomaly Detection | MVTec LOCO AD | DSKD  | Detection AUROC (only logical)   | 81.2         | #17         |
| Anomaly Detection | MVTec LOCO AD | DSKD  | Detection AUROC (only structural)| 86.9         | #16         |
| Anomaly Detection | MVTec LOCO AD | DSKD  | Segmentation AU-sPRO (until FPR 5%) | 73.0      | #4          |