Interactive Segmentation

64 papers with code • 14 benchmarks • 8 datasets

Interactive segmentation is the task of segmenting objects in an image from user guidance such as clicks, scribbles, or bounding boxes, with the mask refined iteratively as the user provides additional input.

Most implemented papers

Interactive Image Segmentation With First Click Attention

mindspore-ai/models CVPR 2020

In the task of interactive image segmentation, users initially click one point to segment the main body of the target object and then provide more points on mislabeled regions iteratively for a precise segmentation.
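The click-then-refine protocol described above is typically evaluated by simulating user clicks: the next click is placed at the center of the largest mislabeled region of the current prediction. A minimal sketch of that click-simulation step, using a Euclidean distance transform to find the error pixel deepest inside the error region (details vary across papers; `next_click` is an illustrative name, not an API from any of the listed repositories):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def next_click(gt_mask: np.ndarray, pred_mask: np.ndarray):
    """Simulate the next user click for interactive-segmentation evaluation.

    Returns ((row, col), is_positive) for a click inside the largest-error
    area, or None when the prediction already matches the ground truth.
    """
    error = gt_mask != pred_mask
    if not error.any():
        return None
    # Distance from each error pixel to the nearest correctly labeled pixel;
    # pad with zeros so error regions touching the image border also count.
    padded = np.pad(error, 1)
    dist = distance_transform_edt(padded)[1:-1, 1:-1]
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    # Positive click on missed foreground, negative click on false positives.
    is_positive = bool(gt_mask[cy, cx])
    return (int(cy), int(cx)), is_positive
```

In an evaluation loop, the returned click is fed back to the model, a new mask is predicted, and the process repeats until the mask reaches a target IoU or a click budget is exhausted.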

Medical Image Segmentation Using Deep Learning: A Survey

Liuhongzhi2018/Medical_images_research 28 Sep 2020

First, rather than directly dividing the deep-learning literature on medical image segmentation into many groups and introducing the papers of each group in detail, as traditional surveys do, we classify currently popular work according to a multi-level structure from coarse to fine.

MIDeepSeg: Minimally Interactive Segmentation of Unseen Objects from Medical Images Using Deep Learning

taigw/GeodisTK 25 Apr 2021

To solve these problems, we propose a novel deep learning-based interactive segmentation method that not only has high efficiency due to only requiring clicks as user inputs but also generalizes well to a range of previously unseen objects.
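Methods in this family turn user clicks into guidance maps that are stacked with the image as extra network input channels. A minimal sketch of that encoding step: MIDeepSeg uses an exponentialized *geodesic* distance (computed with GeodisTK), whereas this dependency-free illustration substitutes plain Euclidean distance; `click_guidance_map` is a hypothetical helper, not part of the repository's API.

```python
import numpy as np

def click_guidance_map(shape, clicks, sigma=10.0):
    """Encode (row, col) clicks as a soft guidance map in [0, 1].

    Each click contributes exp(-d / sigma), where d is the Euclidean
    distance to the click; overlapping clicks are merged with max().
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    guidance = np.zeros((h, w), dtype=np.float32)
    for cy, cx in clicks:
        d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
        guidance = np.maximum(guidance, np.exp(-d / sigma))
    return guidance
```

The resulting map peaks at 1.0 on each clicked pixel and decays with distance, giving the network a smooth, localized hint rather than a single hot pixel.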

FocusCut: Diving Into a Focus View in Interactive Segmentation

frazerlin/focuscut CVPR 2022

However, the global view causes the model to lose focus on later clicks and does not align with user intentions.

ECONet: Efficient Convolutional Online Likelihood Network for Scribble-based Interactive Segmentation

masadcv/econet-monailabel 12 Jan 2022

Automatic segmentation of lung lesions associated with COVID-19 in CT images requires large amounts of annotated volumes.

SimpleClick: Interactive Image Segmentation with Simple Vision Transformers

uncbiag/simpleclick ICCV 2023

Although the plain-ViT design is simple and has proven effective, it has not yet been explored for interactive image segmentation.

CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks

xmed-lab/clip_surgery 12 Apr 2023

Contrastive Language-Image Pre-training (CLIP) is a powerful multimodal large vision model that has demonstrated significant benefits for downstream tasks, including many zero-shot learning and text-guided vision tasks.

Segment Everything Everywhere All at Once

IDEA-Research/Grounded-Segment-Anything NeurIPS 2023

In SEEM, we propose a novel decoding mechanism that enables diverse prompting for all types of segmentation tasks, aiming at a universal segmentation interface that behaves like large language models (LLMs).

Segment Anything Model for Medical Image Analysis: an Experimental Study

mazurowski-lab/segment-anything-medical-evaluation 20 Apr 2023

We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others.

One-Prompt to Segment All Medical Images

wujunde/promptunet 17 May 2023

Tested on 14 previously unseen datasets, the One-Prompt Model showcases superior zero-shot segmentation capabilities, outperforming a wide range of related methods.