Crossover Learning for Fast Online Video Instance Segmentation

Modeling temporal visual context across frames is critical for video instance segmentation (VIS) and other video understanding tasks. In this paper, we propose a fast online VIS model named CrossVIS. For temporal information modeling in VIS, we present a novel crossover learning scheme that uses the instance feature in the current frame to localize, pixel-wise, the same instance in other frames. Unlike previous schemes, crossover learning requires no additional network parameters for feature enhancement. By integrating with the instance segmentation loss, crossover learning enables efficient cross-frame instance-to-pixel relation learning and brings cost-free improvement during inference. In addition, a global balanced instance embedding branch is proposed for more accurate and more stable online instance association. We conduct extensive experiments on three challenging VIS benchmarks, i.e., YouTube-VIS-2019, OVIS, and YouTube-VIS-2021, to evaluate our methods. To our knowledge, CrossVIS achieves state-of-the-art performance among all online VIS methods and shows a decent trade-off between latency and accuracy. Code will be available to facilitate future research.
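The crossover idea above can be illustrated with a small sketch. The following is a minimal, hypothetical illustration (not the paper's implementation) assuming a CondInst-style dynamic-filter mask head: instance-conditioned 1x1 convolution filters predicted at frame t are applied both to frame t's feature map (the usual per-frame mask) and to another frame's feature map (the crossover mask of the same instance); all names, shapes, and the two-layer head are illustrative assumptions.

```python
import numpy as np

def dynamic_mask_head(features, filters, biases):
    """Apply instance-conditioned 1x1 dynamic convs to a feature map.

    features: (C, H, W) array; filters/biases: per-layer weights.
    Returns (1, H, W) mask logits for that instance on that frame.
    """
    x = features
    for i, (w, b) in enumerate(zip(filters, biases)):
        # A 1x1 conv is a matrix multiply over the channel axis.
        x = np.einsum('oc,chw->ohw', w, x) + b[:, None, None]
        if i < len(filters) - 1:
            x = np.maximum(x, 0.0)  # ReLU between layers
    return x

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
feat_t  = rng.standard_normal((C, H, W))   # frame t feature map
feat_t2 = rng.standard_normal((C, H, W))   # another frame's feature map

# Instance-conditioned parameters predicted at frame t (hypothetical
# two-layer head: C -> C -> 1 channels).
filters = [rng.standard_normal((C, C)), rng.standard_normal((1, C))]
biases  = [rng.standard_normal(C), rng.standard_normal(1)]

# Same-frame mask, and the cross-frame mask supervised by crossover learning:
mask_same  = dynamic_mask_head(feat_t,  filters, biases)
mask_cross = dynamic_mask_head(feat_t2, filters, biases)
```

Because the same predicted filters produce both masks, supervising `mask_cross` with the instance's ground truth in the other frame adds no parameters at inference time, matching the "cost-free improvement" claim in the abstract.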

PDF Abstract (ICCV 2021)
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Video Instance Segmentation | OVIS validation | CrossVIS (ResNet-50, calibration) | mask AP | 18.1 | #20 |
| | | | AP50 | 35.5 | #21 |
| | | | AP75 | 16.9 | #20 |
| Video Instance Segmentation | OVIS validation | CrossVIS (ResNet-50) | mask AP | 14.9 | #26 |
| | | | AP50 | 32.7 | #26 |
| | | | AP75 | 12.1 | #27 |
| Video Instance Segmentation | YouTube-VIS validation | CrossVIS (ResNet-101) | mask AP | 36.6 | #27 |
| | | | AP50 | 57.3 | #27 |
| | | | AP75 | 39.7 | #25 |
| | | | AR1 | 36 | #25 |
| | | | AR10 | 42 | #24 |

