Jointly Learning Convolutional Representations to Compress Radiological Images and Classify Thoracic Diseases in the Compressed Domain

Deep learning models trained on natural images are commonly used for various classification tasks in the medical domain. Typically, very high-dimensional medical images are down-sampled using interpolation techniques before being fed to deep learning models that are ImageNet compliant and accept only low-resolution images of size 224 × 224 px. This popular practice may lead to the loss of key information, thus hampering classification: significant pathological features in medical images are typically small in size and are therefore highly affected by down-sampling. To combat this problem, we introduce a convolutional neural network (CNN) based classification approach that learns to reduce the resolution of the image using an autoencoder and, at the same time, classify it using another network, with both tasks trained jointly. This algorithm guides the model to learn essential representations from high-resolution images for classification along with reconstruction. We have used the publicly available dataset of chest X-rays to evaluate this approach and have outperformed the state of the art on the test data. In addition, we have experimented with the effects of different augmentation approaches on this dataset and report baselines using some well-known ImageNet-class CNNs.
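A minimal PyTorch sketch of the joint compression-and-classification idea described in the abstract is given below. The encoder/decoder layouts, the DenseNet-121 classifier backbone, and the loss weighting `alpha` are illustrative assumptions, not the authors' exact AE-CNN design.

```python
# Hypothetical sketch of jointly training an autoencoder that compresses a
# high-resolution chest X-ray to 224x224 and a CNN that classifies the
# compressed representation. Sizes and modules are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class AECNN(nn.Module):
    def __init__(self, num_classes=14):
        super().__init__()
        # Encoder: learned down-sampling from high resolution (e.g. 1024x1024)
        # to a 3-channel 224x224 "compressed" image instead of naive interpolation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 1024 -> 512
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 512 -> 256
            nn.Conv2d(32, 3, 3, stride=1, padding=1),              # keep 256, 3 channels
            nn.AdaptiveAvgPool2d(224),                             # 256 -> 224
        )
        # Decoder: reconstructs the original-resolution image from the compressed one.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(size=(1024, 1024), mode='bilinear', align_corners=False),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        # Classifier: an ImageNet-style CNN operating on the 224x224 compressed image.
        self.classifier = models.densenet121(weights=None)
        self.classifier.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        z = self.encoder(x)          # compressed 224x224 representation
        recon = self.decoder(z)      # reconstruction of the input resolution
        logits = self.classifier(z)  # multi-label disease predictions
        return recon, logits

def joint_loss(recon, x, logits, labels, alpha=1.0):
    # Joint objective: reconstruction (MSE) + multi-label classification (BCE).
    # labels is a float multi-hot tensor; the weighting alpha is an assumption.
    return nn.functional.mse_loss(recon, x) + \
           alpha * nn.functional.binary_cross_entropy_with_logits(logits, labels)
```

Because both terms are minimized together, the 224 × 224 compressed representation is optimized for classification as well as reconstruction, rather than being a fixed interpolated resize.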


Datasets

ChestX-ray14
Results from the Paper


Task                | Dataset      | Model  | Metric | Value  | Global Rank
Pneumonia Detection | ChestX-ray14 | AE-CNN | AUROC  | 0.8241 | #5
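If the reported AUROC is a mean over the per-label AUROCs of the 14 ChestX-ray14 disease classes (a common convention on this dataset), it could be computed as sketched below; the array names and shapes are assumptions, not the paper's evaluation code.

```python
# Hedged sketch: mean AUROC over multi-label predictions, assuming y_true and
# y_score are (num_samples, num_labels) arrays of binary targets and sigmoid scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auroc(y_true: np.ndarray, y_score: np.ndarray):
    # Compute AUROC independently for each label, then average.
    per_label = [roc_auc_score(y_true[:, c], y_score[:, c])
                 for c in range(y_true.shape[1])]
    return float(np.mean(per_label)), per_label
```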

Methods


No methods listed for this paper.