Brain Tumor Segmentation
123 papers with code • 9 benchmarks • 4 datasets
Brain Tumor Segmentation is a medical image analysis task that involves the separation of brain tumors from normal brain tissue in magnetic resonance imaging (MRI) scans. The goal of brain tumor segmentation is to produce a binary or multi-class segmentation map that accurately reflects the location and extent of the tumor.
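A minimal numpy sketch of how a segmentation map is produced from per-voxel class probabilities: an argmax gives the multi-class map, and thresholding against the background class gives a binary tumor mask. The probabilities and class names below are illustrative, not the output of any specific model.

```python
import numpy as np

# Toy per-pixel probabilities for 3 classes (background, edema, tumor core);
# shape (num_classes, height, width). A real model would produce these.
probs = np.array([
    [[0.90, 0.20], [0.10, 0.30]],  # background
    [[0.05, 0.70], [0.20, 0.30]],  # edema
    [[0.05, 0.10], [0.70, 0.40]],  # tumor core
])

seg_map = probs.argmax(axis=0)                # multi-class segmentation map
binary_map = (seg_map > 0).astype(np.uint8)   # tumor vs. normal tissue

print(seg_map)     # [[0 1] [2 2]]
print(binary_map)  # [[0 1] [1 1]]
```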
(Image credit: Brain Tumor Segmentation with Deep Neural Networks)
Libraries
Use these libraries to find Brain Tumor Segmentation models and implementations.
Most implemented papers
Diffusion Models for Implicit Image Segmentation Ensembles
By modifying the training and sampling scheme, we show that diffusion models can perform lesion segmentation of medical images.
Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation
We propose a dual-pathway, 11-layer-deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation.
CNN-based Segmentation of Medical Imaging Data
While most of the existing literature on medical image segmentation focuses on soft tissue and the major organs, this work is validated on data from both the central nervous system and the bones of the hand.
SegAN: Adversarial Network with Multi-scale $L_1$ Loss for Medical Image Segmentation
Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN performs comparably to the state of the art for whole-tumor and tumor-core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015, SegAN outperforms the state of the art in both Dice score and precision.
Lesion Focused Super-Resolution
Super-resolution (SR) for image enhancement has great importance in medical image applications.
Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis
More importantly, learning a model from scratch in 3D does not necessarily outperform transfer learning from ImageNet in 2D. However, our Models Genesis consistently tops all 2D approaches, including fine-tuning models pre-trained on ImageNet and fine-tuning the 2D versions of Models Genesis, confirming the importance of 3D anatomical information and the significance of Models Genesis for 3D medical imaging.
3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction
Dice coefficients for enhancing tumor, tumor core, and the whole tumor are 0.737, 0.807, and 0.894 respectively on the validation dataset.
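The Dice coefficients reported above measure overlap between a predicted and a ground-truth mask; a minimal numpy sketch of the standard definition (2·|A∩B| / (|A|+|B|)) on toy binary masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter) / (pred.sum() + gt.sum() + eps)

# Toy masks: 2 overlapping positives out of 3 in each mask.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, gt), 3))  # 2*2 / (3+3) ≈ 0.667
```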
Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration
To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis.
What is the best data augmentation for 3D brain tumor segmentation?
Training segmentation networks requires large annotated datasets, which in medical imaging can be hard to obtain.
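One common, label-preserving augmentation for 3D volumes is a random axis flip, applied identically to the image and its mask so the annotation stays aligned. The sketch below is illustrative and not the specific augmentation policy evaluated in the paper:

```python
import numpy as np

def random_flip_3d(volume, mask, rng):
    """Randomly flip a 3D volume and its mask along each spatial axis."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
            mask = np.flip(mask, axis=axis)
    return volume.copy(), mask.copy()

rng = np.random.default_rng(0)
vol = np.arange(8).reshape(2, 2, 2).astype(float)
msk = (vol > 3).astype(np.uint8)  # toy "tumor" mask
aug_vol, aug_msk = random_flip_3d(vol, msk, rng)

# Volume and mask stay aligned: the same voxels remain labeled.
assert (aug_msk == (aug_vol > 3)).all()
```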
TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric spatial feature maps.
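The volumetric feature extraction such an encoder performs reduces to 3D convolutions over the scan. A naive, illustrative numpy version of a single "valid" 3D cross-correlation (real encoders stack many learned kernels and use optimized library implementations):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation over a volume."""
    kd, kh, kw = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = (volume[z:z+kd, y:y+kh, x:x+kw] * kernel).sum()
    return out

vol = np.ones((4, 4, 4))            # toy constant volume
kernel = np.ones((3, 3, 3)) / 27.0  # simple 3x3x3 averaging filter
feat = conv3d_valid(vol, kernel)
print(feat.shape)  # (2, 2, 2)
```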