Learning to Associate Every Segment for Video Panoptic Segmentation

Temporal correspondence - linking pixels or objects across frames - is a fundamental supervisory signal for video models. For panoptic understanding of dynamic scenes, we further extend this concept to every segment. Specifically, we aim to learn coarse segment-level matching and fine pixel-level matching jointly. We implement this idea by designing two novel learning objectives. To validate our proposal, we adopt a deep Siamese model and train it to learn temporal correspondence at two levels (i.e., segment and pixel) alongside the target task. At inference time, the model processes each frame independently, without any extra computation or post-processing. We show that our per-frame inference model achieves new state-of-the-art results on the Cityscapes-VPS and VIPER datasets. Moreover, owing to its high efficiency, the model runs about three times faster than the previous state-of-the-art approach.
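The abstract does not spell out the two learning objectives, so below is a minimal sketch of one plausible InfoNCE-style formulation of segment-level and pixel-level matching with a Siamese model. This is an illustration, not the authors' implementation; all names (contrastive_match_loss, seg_emb_t, pix_emb_t, the λ weights, etc.) are hypothetical.

```python
# Hedged sketch: contrastive matching at two granularities, assuming
# embeddings have already been extracted by a shared (Siamese) backbone.
import torch
import torch.nn.functional as F

def contrastive_match_loss(query, key, pos_idx, temperature=0.1):
    """InfoNCE-style loss: each query embedding should match its positive
    key (same identity in the other frame) against all other keys.

    query:   (N, D) embeddings from frame t
    key:     (M, D) embeddings from frame t+k
    pos_idx: (N,) LongTensor, index of the positive key for each query
    """
    query = F.normalize(query, dim=1)
    key = F.normalize(key, dim=1)
    logits = query @ key.t() / temperature  # (N, M) similarity matrix
    return F.cross_entropy(logits, pos_idx)

def segment_level_loss(seg_emb_t, seg_emb_tk, track_match):
    # Coarse matching: one embedding per predicted segment (e.g.,
    # mask-pooled features); positives share a ground-truth track id.
    return contrastive_match_loss(seg_emb_t, seg_emb_tk, track_match)

def pixel_level_loss(pix_emb_t, pix_emb_tk, pix_match):
    # Fine matching: per-pixel embeddings sampled inside matched
    # segments; positives are corresponding pixels across frames.
    return contrastive_match_loss(pix_emb_t, pix_emb_tk, pix_match)

# During training the matching terms would be added to the panoptic task
# loss; per-frame inference uses none of this machinery:
# loss = task_loss + lambda_seg * seg_loss + lambda_pix * pix_loss
```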

CVPR 2021

Results from the Paper


Ranked #6 on Video Panoptic Segmentation on Cityscapes-VPS (using extra training data)

Task: Video Panoptic Segmentation
Dataset: Cityscapes-VPS
Model: VPSNet-SiamTrack (uses extra training data)

Metric        Value   Global Rank
VPQ           57.3    #6
VPQ (thing)   44.7    #3
VPQ (stuff)   66.4    #5
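For context, VPQ (Video Panoptic Quality) generalizes image-level PQ to segment tubes spanning a temporal window. The formulation below follows the Cityscapes-VPS benchmark paper; exact window sizes are omitted here.

$$
\mathrm{VPQ}^{k} = \frac{1}{N_{\mathrm{classes}}} \sum_{c} \frac{\sum_{(u,\hat{u}) \in TP_c} \mathrm{IoU}(u,\hat{u})}{|TP_c| + \tfrac{1}{2}|FP_c| + \tfrac{1}{2}|FN_c|}
$$

where $u$ and $\hat{u}$ are predicted and ground-truth segment tubes over a window of $k$ frames, matched per class $c$; the reported VPQ averages $\mathrm{VPQ}^{k}$ over several window sizes.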
