Self-Supervised Video Object Segmentation by Motion-Aware Mask Propagation

27 Jul 2021 · Bo Miao, Mohammed Bennamoun, Yongsheng Gao, Ajmal Mian

We propose a self-supervised spatio-temporal matching method, coined Motion-Aware Mask Propagation (MAMP), for video object segmentation. MAMP leverages a frame reconstruction task for training and requires no annotations. During inference, MAMP extracts high-resolution features from each frame and builds a memory bank from the features and predicted masks of selected past frames. MAMP then propagates the masks from the memory bank to subsequent frames via our proposed motion-aware spatio-temporal matching module, which handles fast motion and long-term matching scenarios. Evaluations on the DAVIS-2017 and YouTube-VOS datasets show that MAMP achieves state-of-the-art performance with stronger generalization ability than existing self-supervised methods, i.e., 4.2% higher mean J&F on DAVIS-2017 and 4.85% higher mean J&F on the unseen categories of YouTube-VOS than the nearest competitor. Moreover, MAMP performs on par with many supervised video object segmentation methods. Our code is available at: https://github.com/bo-miao/MAMP.
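
To make the memory-bank propagation step concrete, below is a minimal PyTorch sketch of propagating masks by top-k feature matching against stored frames. This is an illustrative stand-in, not the paper's implementation: the function and parameter names (propagate_masks, topk, temperature) are our assumptions, and the global matching shown here omits MAMP's motion-aware local search window (see the linked repository for the actual module).

```python
import torch
import torch.nn.functional as F

def propagate_masks(mem_feats, mem_masks, query_feat, topk=5, temperature=0.07):
    """Propagate soft object masks from memory frames to the query frame
    via dense top-k feature matching (hypothetical sketch, not MAMP's
    motion-aware matching module).

    mem_feats:  (T, C, H, W) features of selected past (memory) frames
    mem_masks:  (T, N, H, W) soft masks for N objects in those frames
    query_feat: (C, H, W)    features of the current frame
    returns:    (N, H, W)    predicted soft masks for the current frame
    """
    T, C, H, W = mem_feats.shape
    N = mem_masks.shape[1]

    # Flatten space and time: the memory offers T*H*W candidate locations.
    mem = F.normalize(mem_feats.permute(1, 0, 2, 3).reshape(C, -1), dim=0)  # (C, THW)
    qry = F.normalize(query_feat.reshape(C, -1), dim=0)                     # (C, HW)
    labels = mem_masks.permute(1, 0, 2, 3).reshape(N, -1)                   # (N, THW)

    # Cosine affinity between every query location and every memory location.
    affinity = qry.t() @ mem                                                # (HW, THW)

    # Keep only the top-k memory matches per query location and softmax them,
    # so each pixel copies labels from its most similar past pixels.
    vals, idx = affinity.topk(topk, dim=1)                                  # (HW, k)
    weights = F.softmax(vals / temperature, dim=1)                          # (HW, k)

    # Gather the mask labels of the selected memory locations and blend.
    gathered = labels[:, idx]                                               # (N, HW, k)
    out = (gathered * weights.unsqueeze(0)).sum(dim=2)                      # (N, HW)
    return out.reshape(N, H, W)
```

A typical usage would threshold or argmax the returned soft masks (over the N objects plus background) to obtain the hard segmentation for the current frame before adding it to the memory bank.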

Task: Semi-Supervised Video Object Segmentation

Dataset           Model  Metric              Value  Global Rank
DAVIS 2017 (val)  MAMP   Jaccard (Mean)      68.3   #56
DAVIS 2017 (val)  MAMP   F-measure (Mean)    71.2   #61
DAVIS 2017 (val)  MAMP   J&F                 69.7   #60
YouTube-VOS 2018  MAMP   F-measure (Seen)    68.4   #47
YouTube-VOS 2018  MAMP   F-measure (Unseen)  73.2   #44
YouTube-VOS 2018  MAMP   Overall             68.2   #45
YouTube-VOS 2018  MAMP   Jaccard (Seen)      67.0   #48
YouTube-VOS 2018  MAMP   Jaccard (Unseen)    64.5   #43
