Lesion segmentation is the task of delineating lesions from surrounding tissue in medical images.
(Image credit: D-UNet)
A discriminator model is then trained to predict whether two lesion segmentations are derived from scans acquired with the same scanner type, achieving 78% accuracy on this task.
Domain adaptation in healthcare data is a potentially critical component in making computer-aided diagnostic systems applicable across multiple sites and imaging scanners.
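The idea behind such a discriminator can be sketched in miniature: a logistic-regression classifier over hand-crafted shape features of a mask pair. Everything below (the feature choice, the pair representation, the training loop) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

def mask_features(mask):
    """Two simple shape features of a binary lesion mask: foreground
    fraction and boundary roughness (label changes per pixel)."""
    area = mask.mean()
    edges = np.abs(np.diff(mask, axis=0)).sum() + np.abs(np.diff(mask, axis=1)).sum()
    return np.array([area, edges / mask.size])

def pair_features(a, b):
    """Absolute feature difference: near zero for same-scanner pairs,
    large for cross-scanner pairs."""
    return np.abs(mask_features(a) - mask_features(b))

def train_discriminator(pairs, labels, lr=1.0, steps=2000):
    """Logistic regression predicting whether a pair of segmentations
    comes from the same scanner type (label 1) or not (label 0)."""
    X = np.array([pair_features(a, b) for a, b in pairs])
    y = np.array(labels, dtype=float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                               # gradient of the logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

In the paper the discriminator is itself a learned network over the segmentations; the hand-crafted features here only keep the sketch self-contained.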
In this work, we propose a weakly-supervised co-segmentation model that first generates pseudo-masks from the RECIST slices and uses these as training labels for an attention-based convolutional neural network capable of segmenting common lesions from a pair of CT scans.
Lesion segmentation on computed tomography (CT) scans is an important step for precisely monitoring changes in lesion/tumor growth.
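A RECIST annotation marks a lesion on a single slice by its long axis and a perpendicular short axis. A crude pseudo-mask can be obtained by filling the ellipse those two axes define; the sketch below uses this plain ellipse fill as an illustrative stand-in for the paper's pseudo-mask generation step:

```python
import numpy as np

def recist_pseudo_mask(shape, p1, p2, short_axis_len):
    """Fill the ellipse defined by a RECIST long axis (endpoints p1, p2,
    given as (x, y)) and a perpendicular short axis of the given length."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    center = (p1 + p2) / 2.0
    a = np.linalg.norm(p2 - p1) / 2.0            # semi-major axis
    b = short_axis_len / 2.0                     # semi-minor axis
    theta = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    # rotate pixel coordinates into the ellipse frame
    dx, dy = xs - center[0], ys - center[1]
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```

Such a mask is deliberately rough; the point of the weakly-supervised setup is that the segmentation network is trained to improve on it.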
Ultrasound (US) is one of the most commonly used imaging modalities in both diagnosis and surgical interventions due to its low cost, safety, and non-invasive nature.
The two-stage deep learning algorithm consists of a U-Net segmentation model with the annotation scheme and a CNN classifier model.
We train the network on 247 images spanning a variety of lesion sizes, complexities, and anatomical sites.
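The segment-then-classify wiring of such a two-stage pipeline can be sketched as follows; the threshold-based `segment` and rule-based `classify` stand-ins below are placeholders for the trained U-Net and CNN, not the actual models:

```python
import numpy as np

def segment(image):
    """Stage 1 placeholder for the U-Net: a simple intensity threshold
    produces a binary lesion mask."""
    return image > image.mean() + image.std()

def classify(image, mask):
    """Stage 2 placeholder for the CNN classifier: crop the masked
    region and apply a trivial rule (real pipeline: a trained CNN)."""
    if not mask.any():
        return "no-lesion"
    ys, xs = np.where(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return "lesion" if crop.mean() > image.mean() else "no-lesion"

def two_stage_pipeline(image):
    mask = segment(image)
    return mask, classify(image, mask)
```

The point of the split is that the classifier only ever sees the region the segmentation stage proposes, so its training data can stay small and focused.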
In this paper, we propose a diabetic retinopathy generative adversarial network (DR-GAN) to synthesize high-resolution fundus images which can be manipulated with arbitrary grading and lesion information.
In this paper, we demonstrate the feasibility and performance of deep residual neural networks for volumetric segmentation of irreversibly damaged brain tissue lesions on T1-weighted MRI scans for chronic stroke patients.
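A residual unit computes y = x + F(x), so each block only has to learn a correction to the identity, which eases optimization of deep volumetric networks. A minimal numpy sketch, with 1×1×1 channel-mixing convolutions standing in for full 3-D convolutions (an illustrative simplification, not the paper's exact block):

```python
import numpy as np

def conv1x1x1(x, w):
    """Pointwise 3-D convolution: mix channels at every voxel.
    x: (C_in, D, H, W), w: (C_out, C_in)."""
    return np.einsum('oc,cdhw->odhw', w, x)

def residual_block(x, w1, w2):
    """y = x + F(x), with F = conv -> ReLU -> conv.
    The identity skip requires matching channel counts on x and F(x)."""
    h = np.maximum(conv1x1x1(x, w1), 0.0)   # ReLU non-linearity
    return x + conv1x1x1(h, w2)
```

With the second weight matrix at zero, F vanishes and the block reduces to the identity, which is exactly why stacking many such blocks stays trainable.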
We implement a visual interpretability method, Layer-wise Relevance Propagation (LRP), on top of a 3D U-Net trained to perform lesion segmentation on the small multi-modal dataset provided by the ISLES 2017 competition.
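LRP redistributes the network's output score backwards layer by layer under a conservation constraint (relevance flowing into a layer equals relevance flowing out). For a single dense layer with non-negative activations, the z+-rule reads R_i = Σ_j (a_i w_ij⁺ / Σ_k a_k w_kj⁺) R_j. A minimal numpy sketch for one such layer (the 3D U-Net itself is omitted):

```python
import numpy as np

def lrp_zplus_dense(a, w, relevance_out, eps=1e-9):
    """z+-rule for one dense layer.
    a: (n_in,) non-negative input activations,
    w: (n_in, n_out) weights, relevance_out: (n_out,) output relevance.
    Returns relevance redistributed to the inputs; total relevance
    is (approximately) conserved."""
    w_pos = np.maximum(w, 0.0)           # keep only excitatory weights
    z = a @ w_pos + eps                  # (n_out,) positive pre-activations
    s = relevance_out / z                # normalize per output neuron
    return a * (w_pos @ s)               # redistribute to inputs
```

Applying such a rule layer by layer from the segmentation output back to the input volume yields the voxel-level relevance maps the paper inspects.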