Robust Collaborative Learning of Patch-level and Image-level Annotations for Diabetic Retinopathy Grading from Fundus Image

Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both the academic and industrial communities. Most convolutional neural network (CNN) based algorithms treat DR grading as a classification task driven by image-level annotations alone. However, these algorithms do not fully exploit the valuable information carried by DR-related lesions. In this paper, we present a robust framework that collaboratively utilizes patch-level and image-level annotations for DR severity grading. Through end-to-end optimization, the framework bi-directionally exchanges fine-grained lesion information and image-level grade information, and thereby learns more discriminative features for DR grading. The proposed framework outperforms recent state-of-the-art algorithms as well as three clinical ophthalmologists, each with over nine years of experience. By testing on datasets with different distributions (e.g., in labels and cameras), we show that our algorithm is robust to the image quality and distribution variations that commonly arise in real-world practice. We inspect the proposed framework through extensive ablation studies to demonstrate the effectiveness and necessity of each design motivation. The code and some valuable annotations are now publicly available.
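The abstract does not spell out the training objective, but the described collaboration between patch-level lesion supervision and image-level grade supervision in an end-to-end model is commonly realized as a weighted multi-task loss. The sketch below is a hypothetical, framework-agnostic illustration of that idea (the loss names and the balancing coefficient `lam` are assumptions, not taken from the paper):

```python
import math

def grade_loss(grade_probs, grade_label):
    """Image-level DR grade loss: multi-class cross-entropy on the
    predicted grade distribution (hypothetical formulation)."""
    return -math.log(max(grade_probs[grade_label], 1e-12))

def lesion_loss(patch_preds, patch_labels):
    """Patch-level lesion loss: binary cross-entropy averaged over
    patches (hypothetical formulation)."""
    eps = 1e-12
    terms = [t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
             for p, t in zip(patch_preds, patch_labels)]
    return -sum(terms) / len(terms)

def collaborative_loss(grade_probs, grade_label, patch_preds, patch_labels,
                       lam=0.5):
    """Joint objective optimized end-to-end: image-level grading term plus
    a patch-level lesion term, balanced by an assumed coefficient lam."""
    return (grade_loss(grade_probs, grade_label)
            + lam * lesion_loss(patch_preds, patch_labels))
```

In this reading, gradients from the lesion term shape the shared features used by the grading head, which is one simple way the fine-grained lesion information and the image-level grade information can inform each other during training.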
