Self-supervised Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images

Unsupervised anomaly detection (UAD), which requires only normal (healthy) training images, is an important tool for enabling the development of medical image analysis (MIA) applications such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, relying exclusively on normal images can cause model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not exploit MIA domain knowledge when designing the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation that contrasts normal images against multiple classes of synthesised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity; the abnormal images are formed by simulating a varying number of lesions of different sizes and appearances in the normal images. In our experiments, we show that MSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening, and Covid-19 chest X-ray datasets.
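The abstract describes two key ingredients: synthesising abnormal images by simulating lesions in normal images, and an optimisation that pulls each class (normal plus several synthetic-anomaly classes) into a tight cluster under both Euclidean distance and cosine similarity. A minimal sketch of these two ideas follows; this is not the paper's implementation — the noise-patch augmentation, the centroid-based loss, and all function names here are illustrative assumptions.

```python
import numpy as np

def synthesize_anomaly(image, rng, num_lesions=1, max_size=8):
    """Simulate lesions by pasting random noise patches of varying size
    into a normal image (a simplified stand-in for strong augmentation;
    the actual MSACL augmentations differ)."""
    out = image.copy()
    h, w = out.shape
    for _ in range(num_lesions):
        s = int(rng.integers(2, max_size + 1))          # lesion side length
        y = int(rng.integers(0, h - s + 1))             # top-left corner
        x = int(rng.integers(0, w - s + 1))
        out[y:y + s, x:x + s] = rng.uniform(0.0, 1.0, size=(s, s))
    return out

def cluster_contrastive_loss(embeddings, labels):
    """Encourage each class to form a tight, dense cluster by penalising
    both Euclidean distance and (1 - cosine similarity) to the class
    centroid; an assumed simplification of the multi-class objective."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        members = embeddings[labels == c]
        centroid = members.mean(axis=0)
        eucl = np.linalg.norm(members - centroid, axis=1).mean()
        cos = 1.0 - (members @ centroid) / (
            np.linalg.norm(members, axis=1) * np.linalg.norm(centroid) + 1e-8)
        loss += eucl + cos.mean()
    return loss / len(classes)

# Usage: build one normal class and two synthetic-anomaly classes
# (differing in lesion count), then score their embeddings.
rng = np.random.default_rng(0)
normals = [rng.uniform(0.4, 0.6, size=(16, 16)) for _ in range(4)]
class1 = [synthesize_anomaly(im, rng, num_lesions=1) for im in normals]
class2 = [synthesize_anomaly(im, rng, num_lesions=3) for im in normals]
embeddings = np.stack([im.ravel() for im in normals + class1 + class2])
labels = np.repeat([0, 1, 2], 4)
loss = cluster_contrastive_loss(embeddings, labels)
```

In a real model the embeddings would come from the encoder being pre-trained, and the loss would be minimised by gradient descent so that, after pre-training, anomalies fall outside the dense normal cluster.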
