U-Net Segmentation

9 papers with code • 1 benchmark • 4 datasets

U-Net is an architecture for semantic segmentation. It consists of a contracting path (the encoder, drawn top-down in the original figure) and an expansive path (the decoder, drawn bottom-up). During contraction, spatial information is reduced while feature information is increased. The contracting path follows the typical architecture of a convolutional network: repeated application of two 3x3 convolutions (unpadded), each followed by a rectified linear unit (ReLU), and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step, the number of feature channels is doubled. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution ("up-convolution") that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary because border pixels are lost in every unpadded convolution. At the final layer, a 1x1 convolution maps each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.
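The shape arithmetic above (unpadded 3x3 convs shrinking the map by 2 px each, pooling halving it, up-convolutions doubling it, channels doubling per level) can be sketched as a plain-Python trace. The defaults below assume the original paper's 572x572 single-channel input and a two-class output; these numbers are illustrative, not part of this page.

```python
def unet_shape_trace(size=572, in_ch=1, n_classes=2, depth=4, base_ch=64):
    """Trace spatial size, channel count, and conv-layer count through U-Net.

    Each unpadded 3x3 convolution shrinks the feature map by 2 px;
    2x2 max pooling (stride 2) halves the size; a 2x2 up-convolution
    doubles it and halves the channels.
    """
    convs = 0
    skips = []  # spatial size of each contracting-level output (for cropping)
    ch = in_ch
    # Contracting path: two 3x3 convs + 2x2 max pool per level
    for level in range(depth):
        ch = base_ch * 2 ** level        # channels double at each step
        size -= 4                        # two unpadded 3x3 convs
        convs += 2
        skips.append(size)
        size //= 2                       # 2x2 max pool, stride 2
    # Bottleneck: two 3x3 convs, channels doubled once more
    ch *= 2
    size -= 4
    convs += 2
    # Expansive path: 2x2 up-conv (halves channels), crop + concat, two 3x3 convs
    for level in reversed(range(depth)):
        size *= 2                        # 2x2 up-convolution
        ch //= 2
        convs += 1
        # the skip map of size skips[level] is center-cropped to `size` here
        size -= 4                        # two unpadded 3x3 convs
        convs += 2
    # Final 1x1 conv maps the 64-channel features to class scores
    ch = n_classes
    convs += 1
    return size, ch, convs
```

Running the trace with the defaults reproduces the figures quoted above: a 572x572 input yields a 388x388 output map, and the layer counter confirms the 23 convolutional layers (8 contracting + 2 bottleneck + 12 expansive + 1 final 1x1).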

Most implemented papers

UCTransNet: Rethinking the Skip Connections in U-Net from a Channel-wise Perspective with Transformer

mcgregorwwww/uctransnet 9 Sep 2021

Specifically, the CTrans module replaces the U-Net skip connections. It consists of a sub-module that conducts multi-scale Channel Cross fusion with Transformer (named CCT) and a Channel-wise Cross-Attention sub-module (named CCA) that guides the fused multi-scale channel-wise information to connect effectively to the decoder features, eliminating the ambiguity.

Optimized High Resolution 3D Dense-U-Net Network for Brain and Spine Segmentation

mrkolarik/3d-brain-segmentation Applied Sciences 2019

The method has been evaluated on a 3D volumetric MRI brain dataset and a CT thoracic scan dataset for spine segmentation.

Fully Automated and Standardized Segmentation of Adipose Tissue Compartments by Deep Learning in Three-dimensional Whole-body MRI of Epidemiological Cohort Studies

lab-midas/med_segmentation 5 Aug 2020

Methods: Quantification and localization of different adipose tissue compartments from whole-body MR images is of high interest to examine metabolic conditions.

GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging

MECLabTUDA/M3d-Cam 26 Feb 2021

Deep Learning (DL) has the potential to optimize machine learning in both the scientific and clinical communities.

Segmentation of Drilled Holes in Texture Wooden Furniture Panels Using Deep Neural Network

rytisss/PanelsDrillSegmentation MDPI Sensors 2021

Drilling operations are an essential part of manufacturing furniture from MDF laminated boards, required for product assembly.

Sentinel 2 Time Series Analysis with 3D Feature Pyramid Network and Time Domain Class Activation Intervals for Crop Mapping

ignazio.gallo/sentinel-2-time-series-with-3d-fpn-and-time-domain-cai ISPRS International Journal of Geo-Information 2021

In this paper, we provide an innovative contribution in the research domain dedicated to crop mapping by exploiting the Sentinel-2 satellite image time series, with the specific aim of extracting information on "where and when" crops are grown.

Convolutional ProteinUnetLM competitive with long short-term memory-based protein secondary structure predictors

Kotrix/ProteinUnetLM Proteins 2022

In recent years, a new generation of algorithms for SS prediction based on embeddings from protein language models (pLMs) is emerging.

Automated Identification and Segmentation of HI Sources in CRAFTS Using Deep Learning Method

fishszh/hisf 29 Mar 2024

We introduce a machine learning-based method for extracting HI sources from 3D spectral data, and construct a dedicated dataset of HI sources from CRAFTS.

AgileFormer: Spatially Agile Transformer UNet for Medical Image Segmentation

sotiraslab/AgileFormer 29 Mar 2024

However, we argue that the current design of the vision transformer-based UNet (ViT-UNet) segmentation models may not effectively handle the heterogeneous appearance (e.g., varying shapes and sizes) of objects of interest in medical image segmentation tasks.