MAF-Net: Multiple attention-guided fusion network for fundus vascular image segmentation

5 May 2023  ·  Yuanyuan Peng, Pengpeng Luan, Zixu Zhang

Accurately segmenting blood vessels in retinal fundus images is crucial for the early screening, diagnosis, and evaluation of some ocular diseases, yet the task remains challenging due to factors such as significant illumination variations, uneven curvilinear structures, and non-uniform contrast. To address this, a multiple attention-guided fusion network (MAF-Net) is proposed to accurately detect blood vessels in retinal fundus images. Traditional UNet-based models may lose partial information because they cannot explicitly model long-distance dependencies, which can lead to unsatisfactory results. To compensate for this loss of scene information and enrich contextual context, an attention fusion mechanism that combines channel attention with a Transformer-based spatial attention mechanism is employed to extract diverse features of blood vessels from retinal fundus images. Subsequently, a dedicated spatial attention mechanism is applied in the skip connections to filter out redundant information and noise from low-level features, enabling better integration with high-level features. In addition, a DropOut layer randomly discards some neurons, which prevents overfitting of the deep learning network and improves its generalization performance. Experiments on the public datasets DRIVE, STARE and CHASEDB1 yield F1 scores of 0.818, 0.836 and 0.811, and Acc values of 0.968, 0.973 and 0.973, respectively. Both visual inspection and quantitative evaluation demonstrate that our method produces satisfactory results compared to some state-of-the-art methods.
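To make the described components more concrete, below is a minimal PyTorch sketch of the three ideas named in the abstract: a fusion of channel attention with Transformer-style spatial self-attention, a spatial gate on the skip connection, and a DropOut layer. All module names, channel sizes, and the exact fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the attention components described in the abstract.
# Module names, channel sizes, and the fusion rule are assumptions for
# illustration; they are not taken from the MAF-Net paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class TransformerSpatialAttention(nn.Module):
    """Multi-head self-attention over spatial positions (assumed design)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)     # global spatial context
        return x + out.transpose(1, 2).reshape(b, c, h, w)


class AttentionFusionBlock(nn.Module):
    """Fuses channel- and Transformer-based spatial attention, then applies Dropout."""
    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = TransformerSpatialAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.drop = nn.Dropout2d(p_drop)               # regularization, as in the abstract

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([self.channel_att(x), self.spatial_att(x)], dim=1))
        return self.drop(fused)


class SkipSpatialGate(nn.Module):
    """Spatial gate on a skip connection: suppresses noise in low-level features
    with a mask predicted from the concatenated low- and high-level features."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        return low * self.mask(torch.cat([low, high], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)                     # encoder feature map
    skip = torch.randn(1, 64, 48, 48)                  # low-level skip feature
    block = AttentionFusionBlock(64)
    gate = SkipSpatialGate(64)
    y = block(x)
    print(y.shape, gate(skip, y).shape)                # both torch.Size([1, 64, 48, 48])
```

In this sketch the two attention branches run in parallel and are merged by a 1x1 convolution; how MAF-Net actually combines them, and where the DropOut layers sit in the network, is specified only in the full paper.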
