no code implementations • 14 Jan 2022 • Masahiro Oda, Tomoaki Suito, Yuichiro Hayashi, Takayuki Kitasaka, Kazuhiro Furukawa, Ryoji Miyahara, Yoshiki Hirooka, Hidemi Goto, Gen Iinuma, Kazunari Misawa, Shigeru Nawano, Kensaku Mori
In this paper, we propose a semi-automated method for generating virtual unfolded (VU) views of the stomach.
Our method recognizes and segments normal and infected lung regions in CT volumes.
We utilize the scale uncertainty across segmentation FCNs with various receptive field sizes to obtain infection regions.
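The idea of scale uncertainty can be sketched as follows: predictions obtained at several receptive field sizes are compared voxelwise, and their disagreement flags uncertain voxels. The sketch below is a minimal, hypothetical surrogate; it replaces the trained FCNs with Gaussian smoothing of a random score map at different sigmas, and the 0.5/0.05 thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Stand-in for a per-voxel infection score map; in the paper this
# comes from a trained segmentation FCN, which is not reproduced here.
scores = rng.random((32, 32, 32)).astype(np.float32)

# Smoothing at several sigmas mimics FCN outputs obtained with
# different receptive field sizes (hypothetical surrogate).
sigmas = [1.0, 2.0, 4.0]
preds = np.stack([gaussian_filter(scores, s) for s in sigmas])

mean_pred = preds.mean(axis=0)   # consensus prediction across scales
uncertainty = preds.std(axis=0)  # scale uncertainty per voxel

# Keep voxels the scales agree on (thresholds assumed for illustration).
infection_mask = (mean_pred > 0.5) & (uncertainty < 0.05)
print(infection_mask.shape)
```

Voxels where the scales disagree (high standard deviation) are excluded, so the final mask keeps only regions that are stable across receptive field sizes.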
Then we generate a Voronoi diagram to estimate the renal vascular dominant regions based on the segmented kidney and renal arteries.
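A Voronoi partition of this kind amounts to assigning every kidney voxel to its nearest artery branch. A minimal sketch, with hypothetical artery points and a toy kidney mask standing in for the paper's segmentation results:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical centerline points of two renal artery branches (voxel coords).
artery_points = np.array([[10, 10, 10], [10, 40, 10],
                          [40, 10, 40], [40, 40, 40]], dtype=float)
artery_branch = np.array([0, 0, 1, 1])  # branch label per point

# Toy binary kidney mask (illustrative only).
kidney = np.zeros((50, 50, 50), dtype=bool)
kidney[5:45, 5:45, 5:45] = True

# Each kidney voxel joins the Voronoi cell of its nearest artery point;
# cells of the same branch together form that branch's dominant region.
voxels = np.argwhere(kidney)
_, nearest = cKDTree(artery_points).query(voxels)
dominant = np.full(kidney.shape, -1, dtype=int)  # -1 = outside kidney
dominant[tuple(voxels.T)] = artery_branch[nearest]
print(np.unique(dominant))
```

Using a k-d tree keeps the nearest-neighbor queries fast even for full-resolution CT volumes, which is why this discrete formulation is a common way to realize a Voronoi diagram on a voxel grid.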
Recent advances in deep learning, such as 3D fully convolutional networks (FCNs), have improved the state-of-the-art in dense semantic segmentation of medical images.
Recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision.
In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models.
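Segmentation quality in this setting is commonly reported with the Dice similarity coefficient between the predicted and ground-truth masks. A short self-contained sketch (the toy masks below are illustrative, not from the papers' data):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 3D masks: two 4x4x4 cubes shifted by one voxel along the first axis.
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 2:6, 2:6] = True
print(round(dice(a, b), 3))  # 2*48 / (64+64) = 0.75
```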
Ranked #2 on 3D Medical Imaging Segmentation on TCIA Pancreas-CT
Deep learning-based methods have achieved impressive results for the segmentation of medical images.
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients.
In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of seven abdominal structures (artery, vein, liver, spleen, stomach, gallbladder, and pancreas) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training organ-specific models.
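The multi-class formulation means the network emits one score volume per structure and a single voxelwise argmax yields all labels at once, rather than running seven organ-specific binary models. A minimal sketch of that final prediction step, with random logits standing in for the FCN output (which is not reproduced here):

```python
import numpy as np

CLASSES = ["background", "artery", "vein", "liver", "spleen",
           "stomach", "gallbladder", "pancreas"]

rng = np.random.default_rng(1)
# Stand-in for the multi-class FCN output: one score volume per class.
logits = rng.standard_normal((len(CLASSES), 16, 16, 16))

# Voxelwise softmax turns raw scores into class probabilities...
e = np.exp(logits - logits.max(axis=0, keepdims=True))
probs = e / e.sum(axis=0, keepdims=True)

# ...and the argmax over the class axis gives one label map covering
# every organ simultaneously, instead of one binary model per organ.
labels = probs.argmax(axis=0)
print(labels.shape)
```

Subtracting the per-voxel maximum before exponentiating is the standard numerically stable softmax; it leaves the probabilities unchanged.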