Zero-Shot Instance Segmentation

9 papers with code • 1 benchmark • 1 dataset

Zero-shot instance segmentation aims to detect and segment object instances from categories that were not seen during training, typically by transferring knowledge from seen categories or by using class-agnostic, promptable segmentation models.

Most implemented papers

Segment Anything

facebookresearch/segment-anything ICCV 2023

We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation.
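
Because SAM is promptable and class-agnostic, it is often evaluated zero-shot by generating masks for every object in an image. Below is a minimal sketch of that "segment everything" usage via the repo's sam_model_registry and SamAutomaticMaskGenerator; the checkpoint filename and image path are placeholders.

```python
# Minimal sketch: class-agnostic "segment everything" inference with SAM.
# Assumes `pip install segment-anything opencv-python` and a downloaded
# ViT-H checkpoint; file paths below are illustrative placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")  # or "cpu"

mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB uint8 array of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts, one per mask

# Each entry carries a binary mask plus metadata such as the bounding box
# and a predicted mask-quality (IoU) score.
for m in masks[:3]:
    print(m["bbox"], m["predicted_iou"], m["segmentation"].shape)
```

The returned masks carry no category labels, which is why this mode is used as a zero-shot, class-agnostic baseline for instance segmentation.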

EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction

rwightman/pytorch-image-models 29 May 2022

Without performance loss on Cityscapes, our EfficientViT provides up to 13.9$\times$ and 6.2$\times$ GPU latency reduction over SegFormer and SegNeXt, respectively.

Zero-Shot Instance Segmentation

zhengye1995/Zero-shot-Instance-Segmentation CVPR 2021

We follow this motivation and propose a new task set named zero-shot instance segmentation (ZSI).

Segment Anything in High Quality

syscv/sam-hq NeurIPS 2023

HQ-SAM is only trained on the introduced dataset of 44k masks, which takes only 4 hours on 8 GPUs.

SupeRGB-D: Zero-shot Instance Segmentation in Cluttered Indoor Environments

evinpinar/supergb-d 22 Dec 2022

We introduce a zero-shot split for Tabletop Objects Dataset (TOD-Z) to enable this study and present a method that uses annotated objects to learn the "objectness" of pixels and generalize to unseen object categories in cluttered indoor environments.

Fast Segment Anything

casia-iva-lab/fastsam 21 Jun 2023

In this paper, we propose a speed-up alternative method for this fundamental task with comparable performance.
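
For reference, a rough sketch of FastSAM's "everything" mode based on the repo's documented usage; the FastSAM and FastSAMPrompt names, argument values, and file paths here should be treated as assumptions drawn from its examples, not a guaranteed API.

```python
# Sketch of FastSAM's everything-mode inference (per the casia-iva-lab/fastsam
# README as documented; names, arguments, and paths are assumptions).
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM("FastSAM-x.pt")  # pretrained weights released by the repo
results = model("example.jpg", device="cuda", retina_masks=True,
                imgsz=1024, conf=0.4, iou=0.9)

prompt = FastSAMPrompt("example.jpg", results, device="cuda")
masks = prompt.everything_prompt()             # all class-agnostic masks
prompt.plot(annotations=masks, output_path="out.jpg")
```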

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

yformer/EfficientSAM 1 Dec 2023

On segment anything tasks such as zero-shot instance segmentation, our EfficientSAMs with SAMI-pretrained lightweight image encoders perform favorably with a significant gain (e.g., ~4 AP on COCO/LVIS) over other fast SAM models.

EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss

mit-han-lab/efficientvit 7 Feb 2024

For training, we begin with knowledge distillation from the SAM-ViT-H image encoder to EfficientViT.
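
A hedged sketch of what such encoder-level distillation could look like in PyTorch: the student (an EfficientViT image encoder) is trained to match frozen SAM-ViT-H image embeddings. The models are passed in as generic callables and the feature-MSE objective is an illustrative assumption, not necessarily the repo's exact recipe.

```python
# Sketch of encoder-to-encoder knowledge distillation (assumed setup):
# teacher = frozen SAM-ViT-H image encoder, student = EfficientViT encoder
# producing embeddings of the same shape. Loss choice is illustrative.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, images, optimizer):
    teacher.eval()
    with torch.no_grad():
        target = teacher(images)        # teacher image embeddings
    pred = student(images)              # student embeddings (same shape assumed)
    loss = F.mse_loss(pred, target)     # simple feature-matching objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```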