Joint Inductive and Transductive Learning for Video Object Segmentation

ICCV 2021 · Yunyao Mao, Ning Wang, Wengang Zhou, Houqiang Li

Semi-supervised video object segmentation is the task of segmenting a target object throughout a video sequence given only its mask annotation in the first frame. The limited information available makes it an extremely challenging task. Most previous best-performing methods adopt either matching-based transductive reasoning or online inductive learning. However, they are either less discriminative between similar instances or make insufficient use of spatio-temporal information. In this work, we propose to integrate transductive and inductive learning into a unified framework, exploiting their complementarity for accurate and robust video object segmentation. The proposed approach consists of two functional branches. The transduction branch adopts a lightweight transformer architecture to aggregate rich spatio-temporal cues, while the induction branch performs online inductive learning to obtain discriminative target information. To bridge these two diverse branches, a two-head label encoder is introduced to learn a suitable target prior for each of them. The generated mask encodings are further forced to be disentangled to better retain their complementarity. Extensive experiments on several prevalent benchmarks show that, without the need for synthetic training data, the proposed approach sets a series of new state-of-the-art records. Code is available at https://github.com/maoyunyao/JOINT.
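The two branches described above can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation (which uses a transformer and an online-optimized target model on deep features); here the transduction branch is reduced to attention-weighted label propagation from past frames, the induction branch to a fixed linear target model, and fusion to a simple average. All function names and the toy feature vectors are illustrative assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of similarity scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def transduction_score(query_feat, memory_feats, memory_labels):
    """Matching-based transductive reasoning (toy version): attend over
    features stored from past frames and propagate their mask labels."""
    sims = [sum(q * k for q, k in zip(query_feat, m)) for m in memory_feats]
    weights = softmax(sims)
    return sum(w * y for w, y in zip(weights, memory_labels))

def induction_score(query_feat, target_model):
    """Online inductive learning (toy version): a linear target model,
    assumed to have been fitted to the first-frame annotation."""
    return sum(w * q for w, q in zip(target_model, query_feat))

def joint_score(query_feat, memory_feats, memory_labels, target_model):
    # Fuse the two complementary branches; a plain average stands in
    # for the paper's learned fusion of the disentangled encodings.
    t = transduction_score(query_feat, memory_feats, memory_labels)
    i = induction_score(query_feat, target_model)
    return 0.5 * (t + i)
```

In the actual method, both branches consume encodings produced by the two-head label encoder, so each branch receives a target prior shaped for its own reasoning style rather than the shared toy features used here.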

Results (Task: Semi-Supervised Video Object Segmentation, Model: JOINT)

Benchmark: DAVIS 2017 (val)
  Jaccard (Mean): 76.0   (Global Rank #43)
  F-measure (Mean): 81.2 (Global Rank #45)
  J&F: 78.6              (Global Rank #45)

Benchmark: DAVIS (no YouTube-VOS training)
  FPS: 4.00              (Global Rank #19)
  D17 val (G): 78.6      (Global Rank #4)
  D17 val (J): 76.0      (Global Rank #4)
  D17 val (F): 81.2      (Global Rank #5)
