MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training

3 Aug 2022  ·  De-An Huang, Zhiding Yu, Anima Anandkumar ·

We propose MinVIS, a minimal video instance segmentation (VIS) framework that achieves state-of-the-art VIS performance with neither video-based architectures nor training procedures. By only training a query-based image instance segmentation model, MinVIS outperforms the previous best result on the challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in training videos as independent images, we can drastically sub-sample the annotated frames in training videos without any modifications. With only 1% of labeled frames, MinVIS outperforms or is comparable to fully-supervised state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is that queries trained to be discriminative between intra-frame object instances are temporally consistent and can be used to track instances without any manually designed heuristics. MinVIS thus has the following inference pipeline: we first apply the trained query-based image instance segmentation to video frames independently. The segmented instances are then tracked by bipartite matching of the corresponding queries. This inference is done in an online fashion and does not need to process the whole video at once. MinVIS thus has the practical advantages of reducing both the labeling costs and the memory requirements, while not sacrificing the VIS performance. Code is available at: https://github.com/NVlabs/MinVIS
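The tracking step described above — segment frames independently, then associate instances by bipartite matching of their query embeddings — can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function name, the use of cosine similarity as the matching score, and the assumption that queries arrive as NumPy arrays are all choices made here for clarity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_queries(prev_queries: np.ndarray, cur_queries: np.ndarray) -> dict:
    """Associate instances across frames by bipartite matching of queries.

    prev_queries, cur_queries: (num_instances, dim) query embeddings from the
    image instance segmentation model, one row per detected instance.
    Returns a mapping: current-frame instance index -> previous-frame index.
    """
    # Normalize so the inner product becomes cosine similarity (a choice
    # made for this sketch; any similarity over queries could be used).
    prev = prev_queries / np.linalg.norm(prev_queries, axis=1, keepdims=True)
    cur = cur_queries / np.linalg.norm(cur_queries, axis=1, keepdims=True)
    similarity = prev @ cur.T

    # Hungarian matching maximizes total similarity (minimize the negation).
    prev_idx, cur_idx = linear_sum_assignment(-similarity)
    return dict(zip(cur_idx.tolist(), prev_idx.tolist()))
```

Because matching only requires the queries of two consecutive frames, this runs online, frame by frame, without holding the whole video in memory.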

Benchmark results (Task: Video Instance Segmentation, Model: MinVIS (Swin-L); global rank in parentheses):

Dataset                      | mask AP    | AP50       | AP75       | AR1        | AR10
OVIS validation              | 39.4 (#2)  | 61.5 (#2)  | 41.3 (#2)  | 18.1 (#1)  | 43.3 (#2)
YouTube-VIS 2021 validation  | 55.3 (#2)  | 76.6 (#3)  | 62.0 (#2)  | 45.9 (#1)  | 60.8 (#1)
YouTube-VIS validation       | 61.6 (#2)  | 83.3 (#3)  | 68.6 (#2)  | 54.8 (#2)  | 66.6 (#2)
