Robustness study of noisy annotation in deep learning based medical image segmentation

10 Mar 2020  ·  Yu Shaode, Zhang Erlei, Wu Junjie, Yu Hang, Yang Zi, Ma Lin, Chen Mingli, Gu Xuejun, Lu Weiguo ·

Partly owing to the use of exhaustively annotated data, deep networks have achieved impressive performance on medical image segmentation. Medical imaging data paired with noisy annotations are, however, ubiquitous, and little is known about how noisy annotations affect deep learning-based medical image segmentation. We studied the effect of noisy annotations in the context of mandible segmentation from CT images. First, 202 images of head-and-neck cancer patients were collected from our clinical database, in which the organs-at-risk had been annotated by one of 12 planning dosimetrists. The mandibles were roughly annotated as planning avoidance structures. Then, the mandible labels were checked and corrected by a physician to obtain clean annotations. Finally, deep learning-based segmentation models were trained with varying ratios of noisy labels in the training data, one model per ratio. In general, a deep network trained with noisy labels produced worse segmentation results than one trained with clean labels, and fewer noisy labels led to better segmentation. When 20% or fewer of the training cases were noisy, no significant difference in prediction performance was found between models trained with noisy and clean labels. This study suggests that deep learning-based medical image segmentation is robust to noisy annotations to some extent, and it highlights the importance of labeling quality in deep learning.
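The protocol described above, assembling training sets with a controlled fraction of noisy annotations and training one model per fraction, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the authors' code: `clean_labels` and `noisy_labels` are placeholder stand-ins for the physician-corrected and dosimetrist mandible masks, and the ratios shown are illustrative rather than the paper's exact settings.

```python
import random

def mix_annotations(clean_labels, noisy_labels, noisy_ratio, seed=0):
    """Pick one annotation per case: `noisy_ratio` of the cases keep the
    original (noisy) dosimetrist label; the rest use the physician-corrected
    (clean) label."""
    assert len(clean_labels) == len(noisy_labels)
    n = len(clean_labels)
    n_noisy = round(noisy_ratio * n)
    rng = random.Random(seed)
    noisy_idx = set(rng.sample(range(n), n_noisy))
    return [noisy_labels[i] if i in noisy_idx else clean_labels[i]
            for i in range(n)]

if __name__ == "__main__":
    # Placeholder IDs standing in for mandible masks (hypothetical data).
    clean = [f"clean_{i:03d}" for i in range(200)]
    noisy = [f"noisy_{i:03d}" for i in range(200)]
    for ratio in (0.0, 0.1, 0.2, 0.5, 1.0):  # illustrative ratios only
        labels = mix_annotations(clean, noisy, ratio)
        n_noisy = sum(lab.startswith("noisy") for lab in labels)
        print(f"ratio={ratio:.1f}: {n_noisy} noisy / {len(labels)} cases")
        # A separate segmentation model would then be trained on each mix
        # and evaluated against the clean reference annotations.
```

Keeping the case set fixed and varying only the label source isolates the effect of annotation noise from other sources of variation between training runs.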
