Magnification Prior: A Self-Supervised Method for Learning Representations on Breast Cancer Histopathological Images

This work presents a novel self-supervised pre-training method for learning efficient representations without labels on histopathology medical images by utilizing magnification factors. Other state-of-the-art works mainly focus on fully supervised learning approaches that rely heavily on human annotations. However, the scarcity of labeled and unlabeled data is a long-standing challenge in histopathology, and representation learning without labels remains largely unexplored in this domain. The proposed method, Magnification Prior Contrastive Similarity (MPCS), enables self-supervised learning of representations without labels on the small-scale breast cancer dataset BreakHis by exploiting the magnification factor, inductive transfer, and reduced human prior. The proposed method matches state-of-the-art fully supervised performance in malignancy classification when only 20% of labels are used in fine-tuning, and outperforms previous works in the fully supervised setting. It formulates a hypothesis and provides empirical evidence that reducing human prior leads to efficient representation learning in self-supervision. The implementation of this work is available on GitHub.
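The abstract does not spell out the contrastive objective here; as a rough illustration only, the sketch below shows an NT-Xent-style contrastive loss (as in SimCLR) where the two "views" of a tissue patch are assumed to be its embeddings at two different magnification factors. All function and variable names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def nt_xent_loss(z_mag_a, z_mag_b, temperature=0.5):
    """NT-Xent contrastive loss between two batches of embeddings.

    z_mag_a, z_mag_b: (N, d) arrays, assumed to be embeddings of the same
    N patches viewed at two different magnification factors (illustrative
    stand-in for the paper's magnification-pair sampling).
    """
    n = z_mag_a.shape[0]
    z = np.concatenate([z_mag_a, z_mag_b], axis=0)        # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize
    sim = (z @ z.T) / temperature                         # cosine similarities
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    # Positive for row i is the same patch at the other magnification.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

In this sketch, embeddings of the same patch at two magnifications are pulled together while all other patches in the batch act as negatives; the loss therefore decreases as the two magnification views become more similar.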



Results from the Paper

 Ranked #1 on Breast Cancer Histology Image Classification on BreakHis (Accuracy (Inter-Patient) metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Breast Cancer Histology Image Classification | BreakHis | EfficientNet-b2 | Accuracy (Inter-Patient) | 92.15 | #1 |
| Breast Cancer Histology Image Classification | BreakHis | EfficientNet-b2 | 1:1 Accuracy | 92.23 | #1 |
| Breast Cancer Histology Image Classification (20% labels) | BreakHis | EfficientNet-b2 | 1:1 Accuracy | 88.77 | #1 |
| Breast Cancer Histology Image Classification (20% labels) | BreakHis | EfficientNet-b2 | Accuracy (Inter-Patient) | 88.77 | #1 |