In this work, we investigate the impact of 13 data augmentation scenarios on melanoma classification, evaluated with three CNN architectures (Inception-v4, ResNet, and DenseNet).
Particularly concerning are models with inconsistent performance on specific subgroups of a class, e.g., exhibiting disparities in skin cancer classification in the presence or absence of a spurious bandage.
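Such disparities can be surfaced by comparing accuracy across subgroups defined by the spurious attribute. The sketch below illustrates this audit on hypothetical labels and predictions; the `has_bandage` flag and all values are illustrative placeholders, not data from this work.

```python
# Minimal sketch: auditing per-subgroup accuracy to surface disparities
# tied to a spurious feature. All data below is illustrative.

def subgroup_accuracy(y_true, y_pred, subgroup_mask):
    """Accuracy restricted to samples where subgroup_mask is True."""
    pairs = [(t, p) for t, p, m in zip(y_true, y_pred, subgroup_mask) if m]
    if not pairs:
        return float("nan")
    return sum(t == p for t, p in pairs) / len(pairs)

# Hypothetical melanoma labels/predictions; has_bandage marks the artifact.
y_true      = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred      = [1, 1, 0, 0, 0, 0, 0, 1]
has_bandage = [False, False, True, True, False, False, True, True]

acc_with    = subgroup_accuracy(y_true, y_pred, has_bandage)
acc_without = subgroup_accuracy(y_true, y_pred, [not m for m in has_bandage])
disparity   = abs(acc_with - acc_without)  # large gap flags reliance on the artifact
```

A large gap between the two subgroup accuracies suggests the model leans on the spurious feature rather than the lesion itself.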
In the first stage, we leverage the inter-class variation of the data distribution for conditional image synthesis: we learn the inter-class mapping and synthesize under-represented class samples from over-represented ones using unpaired image-to-image translation.
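The core constraint in unpaired image-to-image translation (as in CycleGAN-style methods) is cycle consistency: translating a sample to the other domain and back should recover the original. The toy sketch below shows only that loss computation; the linear "generators" and the flattened "image" are stand-ins for the convolutional networks and real images a full system would use.

```python
# Sketch of the cycle-consistency idea behind unpaired image-to-image
# translation. The toy generators below are illustrative stand-ins.

def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def g_ab(x):  # maps domain A (over-represented class) -> domain B
    return [v * 2.0 + 1.0 for v in x]

def g_ba(x):  # maps domain B -> domain A (exact inverse here, so the cycle closes)
    return [(v - 1.0) / 2.0 for v in x]

image_a = [0.1, 0.5, 0.9]          # a flattened "image" from domain A
fake_b  = g_ab(image_a)            # synthesized under-represented sample
recon_a = g_ba(fake_b)             # cycle back to domain A

cycle_loss = l1(recon_a, image_a)  # near zero when the cycle is consistent
```

In training, this cycle loss is minimized jointly with adversarial losses so that the synthesized minority-class samples both look realistic and preserve the source image's content.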
In this work, we address the problem of skin cancer classification using convolutional neural networks.
Several DL architectures have been proposed for classification, segmentation, and detection tasks in medical imaging and computational pathology.
Data augmentation has proved extremely useful: by increasing the variance of the training data, it alleviates overfitting and improves the generalization performance of deep neural networks.
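Two of the most common augmentations are geometric flips and photometric jitter. The sketch below applies both to a grayscale image stored as a list of rows; it is a minimal illustration of the idea, whereas a real pipeline would operate on tensors via a library such as torchvision or Albumentations.

```python
import random

# Minimal sketch of two common augmentations (horizontal flip, brightness
# jitter) on a tiny grayscale "image". Illustrative only.

def hflip(img):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in img]

def brightness(img, factor):
    """Scale pixel intensities, clamping to the valid [0, 255] range."""
    return [[min(255, max(0, round(px * factor))) for px in row] for px_row in [None] for row in img]

def augment(img, rng):
    if rng.random() < 0.5:           # flip with probability 0.5
        img = hflip(img)
    factor = rng.uniform(0.8, 1.2)   # random brightness scale in [0.8, 1.2]
    return brightness(img, factor)

rng = random.Random(0)               # seeded for reproducibility
image = [[10, 20], [30, 40]]
augmented = augment(image, rng)      # same shape, randomly transformed pixels
```

Each training epoch then sees a slightly different version of every image, which is what increases the effective variance of the training set.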
The best ROC AUC values for melanoma and basal cell carcinoma are 94.40% (ResNet-152) and 99.30% (DenseNet-201) versus 82.26% and 88.82% of dermatologists, respectively.
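ROC AUC can be read as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (with ties counting half). The sketch below computes it that way on made-up scores; it is not the paper's evaluation code, and a practical pipeline would use `sklearn.metrics.roc_auc_score`.

```python
# Sketch: ROC AUC as the pairwise probability that a positive outscores
# a negative (ties count 0.5). Scores below are illustrative.

def roc_auc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical melanoma labels and model scores.
y_true = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
auc = roc_auc(y_true, scores)  # fraction of positive/negative pairs ranked correctly
```

Because AUC depends only on the ranking of scores, it is threshold-free, which is why it is the standard metric for comparing classifiers against dermatologists here.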