A Joint Spatial and Magnification Based Attention Framework for Large Scale Histopathology Classification

Deep learning has achieved great success in processing large size medical images such as histopathology slides. However, conventional deep learning methods cannot handle the enormous image sizes; instead, they split the image into patches which are exhaustively processed, usually through multi-instance learning approaches. Moreover, and especially in histopathology, determining the most appropriate magnification to generate these patches is also exhaustive: a model needs to traverse all the possible magnifications to select the optimal one. These limitations make the application of deep learning to large medical images, and in particular histopathological images, markedly inefficient. To tackle these problems, we propose a novel spatial and magnification based attention sampling strategy. First, we use a down-sampled large size image to estimate an attention map that represents a spatial probability distribution of informative patches at different magnifications. Then a small number of patches are cropped from the large size medical image at certain magnifications based on the obtained attention. The final label of the large size image is predicted solely from these patches using an end-to-end training strategy. Our experiments on two different histopathology datasets, the publicly available BACH and a subset of the TCGA-PRAD dataset, demonstrate that the proposed method runs 2.5 times faster with automatic magnification selection in training, and at least 1.6 times faster in inference than using all patches as most state-of-the-art methods do, without losing performance.
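To make the described pipeline concrete, below is a minimal PyTorch sketch of joint spatial and magnification attention sampling: an attention network scores a down-sampled thumbnail per candidate magnification, a handful of patch locations are sampled from the resulting joint distribution, and only those patches are used to predict the slide label. All names here (`AttentionPatchSampler`, `crop_fn`, the layer sizes) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPatchSampler(nn.Module):
    """Sketch of spatial + magnification attention sampling (illustrative only)."""

    def __init__(self, num_magnifications=3, num_patches=8, num_classes=4):
        super().__init__()
        # Small CNN producing one attention map per candidate magnification
        # from the down-sampled whole image.
        self.attention_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_magnifications, 1),
        )
        # Patch-level feature extractor (placeholder; a deeper backbone could be used).
        self.feature_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)
        self.num_patches = num_patches

    def forward(self, thumbnail, crop_fn):
        # thumbnail: (1, 3, h, w) down-sampled slide.
        # crop_fn(mag_idx, y, x) -> (3, p, p) patch from the full-resolution
        # slide at normalized location (y, x); WSI I/O is outside this sketch.
        logits = self.attention_net(thumbnail)               # (1, M, h, w)
        m, h, w = logits.shape[1:]
        probs = F.softmax(logits.flatten(1), dim=1)          # joint distribution over (mag, y, x)
        # Sample a small number of (magnification, location) indices without replacement.
        idx = torch.multinomial(probs[0], self.num_patches)
        patches = []
        for i in idx.tolist():
            mag, rest = divmod(i, h * w)
            y, x = divmod(rest, w)
            patches.append(crop_fn(mag, y / h, x / w))
        feats = self.feature_net(torch.stack(patches))       # (K, 32)
        # Weight patch features by their renormalized attention values,
        # one common way to keep the sampled prediction tied to the attention map.
        weights = probs[0, idx] / probs[0, idx].sum()
        slide_feat = (weights.unsqueeze(1) * feats).sum(0, keepdim=True)
        return self.classifier(slide_feat)
```

In this sketch only `num_patches` patches are ever read from the full-resolution slide, which is where the claimed speed-up over exhaustively processing all patches would come from; the exact attention parameterization and training objective follow the paper, not this code.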
