CompFeat: Comprehensive Feature Aggregation for Video Instance Segmentation

7 Dec 2020  ·  Yang Fu, Linjie Yang, Ding Liu, Thomas S. Huang, Humphrey Shi ·

Video instance segmentation is a complex task that requires detecting, segmenting, and tracking every object in a given video. Previous approaches utilize only single-frame features for the detection, segmentation, and tracking of objects, so their performance suffers in the video setting due to distinct challenges such as motion blur and drastic appearance change. To eliminate the ambiguities introduced by relying on single-frame features alone, we propose a novel comprehensive feature aggregation approach (CompFeat) that refines features at both the frame level and the object level using temporal and spatial context. The aggregation process is carefully designed with a new attention mechanism which significantly increases the discriminative power of the learned features. We further improve the tracking capability of our model through a Siamese design that incorporates both feature similarities and spatial similarities. Experiments conducted on the YouTube-VIS dataset validate the effectiveness of the proposed CompFeat. Our code will be available at https://github.com/SHI-Labs/CompFeat-for-Video-Instance-Segmentation.
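The abstract names two ingredients: attention-based aggregation of features across frames, and a Siamese tracking score combining feature similarity with spatial similarity. The NumPy sketch below illustrates both ideas in their simplest generic form; the function names, the scaled dot-product attention, the cosine/IoU mix, and the weight `alpha` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_temporal(key_feat, ref_feats):
    """Refine a key-frame feature by attending over reference-frame features.

    key_feat:  (D,)   feature vector of the key frame
    ref_feats: (T, D) feature vectors from T neighboring frames
    """
    # Attention weights from scaled dot-product similarity (a generic choice,
    # not necessarily the attention used in CompFeat).
    scores = ref_feats @ key_feat / np.sqrt(key_feat.size)      # (T,)
    weights = softmax(scores)                                   # (T,)
    # Residual mix of the key-frame feature and the weighted sum of
    # reference-frame features.
    return key_feat + weights @ ref_feats

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def track_score(feat_a, feat_b, box_a, box_b, alpha=0.5):
    """Hypothetical association score: cosine feature similarity blended
    with spatial IoU; alpha is an assumed trade-off weight."""
    cos = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return alpha * cos + (1.0 - alpha) * iou(box_a, box_b)
```

A tracker of this shape would compute `track_score` between each detection in the current frame and each existing track, then associate the highest-scoring pairs.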

Results

Task: Video Instance Segmentation
Dataset: YouTube-VIS validation
Model: CompFeat (ResNet-50)

  Metric    Value
  mask AP   35.3
  AP50      56.0
  AP75      38.6
  AR1       33.1
  AR10      40.3
