Pixel-Level Bijective Matching for Video Object Segmentation

4 Oct 2021  ·  Suhwan Cho, Heansung Lee, Minjung Kim, Sungjun Jang, Sangyoun Lee

Semi-supervised video object segmentation (VOS) aims to track, at the pixel level, the objects designated in the initial frame of a video. To fully exploit the appearance information of an object, pixel-level feature matching is widely used in VOS. Conventional feature matching runs in a surjective manner: only the best matches from the query frame to the reference frame are considered, so each query frame location refers to its optimal reference frame location regardless of how often each reference frame location is referenced. This works well in most cases and is robust against rapid appearance variations, but it may cause critical errors when the query frame contains background distractors that look similar to the target object. To mitigate this concern, we introduce a bijective matching mechanism that finds the best matches from the query frame to the reference frame and vice versa. Before the best matches for the query frame pixels are found, the optimal matches for the reference frame pixels are considered first, which prevents each reference frame pixel from being over-referenced. Since this mechanism operates in a strict manner, i.e., pixels are connected if and only if they are each other's best matches, it can effectively eliminate background distractors. In addition, we propose a mask embedding module to improve the existing mask propagation method. By embedding multiple historic masks together with coordinate information, it effectively captures the positional information of a target object.
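The bijective matching idea can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, not the paper's exact formulation: the function name `bijective_matching`, the tensor shapes, the score constants, and the top-k hyper-parameter `k` are all assumptions. It first lets each reference pixel keep only its top-k query connections, then lets each query pixel pick its best surviving match, so a pixel pair is linked only when both directions agree:

```python
import torch

def bijective_matching(ref_feat, qry_feat, ref_mask, k=32):
    """Sketch of bijective pixel matching (names/shapes are assumptions).

    ref_feat : [C, N] L2-normalized reference-frame features (N = H*W)
    qry_feat : [C, M] L2-normalized query-frame features    (M = H*W)
    ref_mask : [N]    soft foreground probability per reference pixel
    k        : max number of query pixels a reference pixel may serve
               (illustrative hyper-parameter; requires k <= M)
    Returns  : [2, M] background/foreground matching scores per query pixel.
    """
    sim = ref_feat.t() @ qry_feat                 # [N, M] cosine similarities

    # Reference -> query: each reference pixel keeps only its top-k most
    # similar query pixels, so no reference pixel is over-referenced.
    kth = sim.topk(k, dim=1).values[:, -1:]       # k-th best score per ref pixel
    sim = sim.masked_fill(sim < kth, -1e4)        # cut all other connections

    # Query -> reference: each query pixel takes its best *surviving* match,
    # separately over foreground and background reference pixels; a pair is
    # linked only if the connection survived both directions.
    fg = sim.masked_fill((ref_mask < 0.5).unsqueeze(1), -1e4).max(dim=0).values
    bg = sim.masked_fill((ref_mask >= 0.5).unsqueeze(1), -1e4).max(dim=0).values
    return torch.stack([bg, fg])
```

Cutting the reference-to-query connections before the final query-side max is what enforces the "if and only if" condition: a query pixel can only select a reference pixel that also ranked it among its own best matches, which is how distractor-like reference pixels are prevented from attracting arbitrarily many query pixels.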

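The mask embedding module can be sketched in the same spirit: stack several previously predicted masks, append normalized (x, y) coordinate channels, and encode them with a small convolutional network. The class name, channel widths, and two-layer depth below are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MaskEmbedding(nn.Module):
    """Embeds a stack of historic masks together with coordinate maps.
    Layer count and channel widths are illustrative assumptions."""

    def __init__(self, num_masks=3, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_masks + 2, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, masks):
        # masks: [B, num_masks, H, W], the most recent predicted masks
        b, _, h, w = masks.shape
        ys = torch.linspace(-1.0, 1.0, h, device=masks.device)
        xs = torch.linspace(-1.0, 1.0, w, device=masks.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gx, gy]).unsqueeze(0).expand(b, -1, -1, -1)
        # Coordinate channels let the embedding encode *where* the target
        # object was in the recent past, not just its shape.
        return self.net(torch.cat([masks, coords], dim=1))
```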
Task: Semi-Supervised Video Object Segmentation · Dataset: DAVIS (no YouTube-VOS training) · Model: BMVOS

| Metric              | Value | Global Rank |
|---------------------|-------|-------------|
| FPS                 | 45.9  | #2          |
| DAVIS 2016 val (G)  | 82.2  | #13         |
| DAVIS 2016 val (J)  | 82.9  | #13         |
| DAVIS 2016 val (F)  | 81.4  | #15         |
| DAVIS 2017 val (G)  | 72.7  | #14         |
| DAVIS 2017 val (J)  | 70.7  | #14         |
| DAVIS 2017 val (F)  | 74.7  | #14         |
| DAVIS 2017 test (G) | 62.7  | #4          |
| DAVIS 2017 test (J) | 60.7  | #2          |
| DAVIS 2017 test (F) | 64.7  | #4          |
