MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training

3 Aug 2022 · De-An Huang, Zhiding Yu, Anima Anandkumar

We propose MinVIS, a minimal video instance segmentation (VIS) framework that achieves state-of-the-art VIS performance with neither video-based architectures nor training procedures. By only training a query-based image instance segmentation model, MinVIS outperforms the previous best result on the challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in training videos as independent images, we can drastically sub-sample the annotated frames in training videos without any modifications. With only 1% of labeled frames, MinVIS outperforms or is comparable to fully-supervised state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is that queries trained to be discriminative between intra-frame object instances are temporally consistent and can be used to track instances without any manually designed heuristics. MinVIS thus has the following inference pipeline: we first apply the trained query-based image instance segmentation model to video frames independently. The segmented instances are then tracked by bipartite matching of the corresponding queries. This inference is done in an online fashion and does not need to process the whole video at once. MinVIS thus has the practical advantages of reducing both the labeling costs and the memory requirements, while not sacrificing the VIS performance. Code is available at: https://github.com/NVlabs/MinVIS
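The tracking step described above reduces to per-frame segmentation followed by bipartite matching of query embeddings between consecutive frames. Below is a minimal sketch of that linking step, not the released implementation: `segment_frame` is a hypothetical callable standing in for the trained query-based image model, and the matching uses `scipy.optimize.linear_sum_assignment` on cosine-similarity costs.

```python
# Minimal sketch of MinVIS-style online tracking: segment each frame
# independently, then link instances by bipartite matching of their query
# embeddings. `segment_frame(frame)` is a hypothetical per-frame model call
# returning (queries: [N, D] array, masks: list of N instance masks).
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_queries(prev_queries, cur_queries):
    """Map each current query to a previous query via Hungarian matching."""
    prev = prev_queries / np.linalg.norm(prev_queries, axis=1, keepdims=True)
    cur = cur_queries / np.linalg.norm(cur_queries, axis=1, keepdims=True)
    cost = -cur @ prev.T                      # negative cosine similarity
    row_ind, col_ind = linear_sum_assignment(cost)
    return dict(zip(row_ind, col_ind))        # current index -> previous index


def track_video(frames, segment_frame):
    """Online inference: no video-based module, only query matching."""
    tracks = []                               # per-frame list of (track_id, mask)
    prev_queries, prev_ids, next_id = None, None, 0
    for frame in frames:
        queries, masks = segment_frame(frame)
        if prev_queries is None:
            ids = list(range(len(queries)))   # first frame starts all tracks
            next_id = len(queries)
        else:
            mapping = match_queries(prev_queries, queries)
            ids = []
            for i in range(len(queries)):
                if i in mapping:
                    ids.append(prev_ids[mapping[i]])  # inherit matched track id
                else:
                    ids.append(next_id)               # unmatched query: new track
                    next_id += 1
        tracks.append(list(zip(ids, masks)))
        prev_queries, prev_ids = queries, ids
    return tracks
```

In practice a query-based model keeps a fixed number of queries per frame, so the matching is one-to-one; the sketch handles the general case only for illustration.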

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Instance Segmentation | OVIS validation | MinVIS (Swin-L) | mask AP | 39.4 | #19 |
| | | | AP50 | 61.5 | #18 |
| | | | AP75 | 41.3 | #16 |
| | | | AR1 | 18.1 | #11 |
| | | | AR10 | 43.3 | #14 |
| Video Instance Segmentation | YouTube-VIS 2021 | MinVIS (Swin-L) | mask AP | 55.3 | #14 |
| | | | AP50 | 76.6 | #15 |
| | | | AP75 | 62 | #12 |
| | | | AR10 | 60.8 | #13 |
| | | | AR1 | 45.9 | #12 |
| Video Instance Segmentation | YouTube-VIS validation | MinVIS (Swin-L) | mask AP | 61.6 | #13 |
| | | | AP50 | 83.3 | #13 |
| | | | AP75 | 68.6 | #9 |
| | | | AR1 | 54.8 | #10 |
| | | | AR10 | 66.6 | #11 |
