Spatiotemporal CNN for Video Object Segmentation

In this paper, we present a unified, end-to-end trainable spatiotemporal CNN model for video object segmentation (VOS), which consists of two branches: a temporal coherence branch and a spatial segmentation branch. The temporal coherence branch, pretrained in an adversarial fashion on unlabeled video data, is designed to capture the dynamic appearance and motion cues of video sequences to guide object segmentation. The spatial segmentation branch focuses on segmenting objects accurately based on the learned appearance and motion cues. To obtain accurate segmentation results, we design a coarse-to-fine process that sequentially applies an attention module to multi-scale feature maps and concatenates them to produce the final prediction. In this way, the spatial segmentation branch is forced to gradually concentrate on object regions. The two branches are jointly fine-tuned on video segmentation sequences in an end-to-end manner. Experiments on three challenging datasets (DAVIS-2016, DAVIS-2017, and YouTube-Objects) show that our method performs favorably against state-of-the-art approaches. Code is available at https://github.com/longyin880815/STCNN.
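The abstract does not spell out the exact form of the attention module or the multi-scale fusion; the following is a minimal NumPy sketch of the coarse-to-fine idea, assuming a simple per-pixel sigmoid gate (a hypothetical 1x1-convolution-style projection `w`) and nearest-neighbour upsampling. It only illustrates how attention-gated coarse features can be propagated to finer scales and concatenated for a final prediction; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(feat, w):
    """feat: (C, H, W) feature map; w: (C,) hypothetical 1x1-conv weights.
    Produces a per-pixel attention map in [0, 1] and gates the features with it."""
    attn = sigmoid(np.einsum("chw,c->hw", feat, w))
    return feat * attn  # attention map broadcast over the channel axis

def upsample2x(x):
    """Nearest-neighbour 2x upsampling along the spatial axes."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def coarse_to_fine(features, weights):
    """features: list of (C, H, W) maps ordered coarse -> fine, with H and W
    doubling at each step. Sequentially gates each scale with attention,
    adds the upsampled coarser result, then concatenates all gated maps
    (upsampled to the finest resolution) along the channel axis."""
    out = None
    fused = []
    for feat, w in zip(features, weights):
        gated = attention_gate(feat, w)
        if out is not None:
            gated = gated + upsample2x(out)  # propagate coarse evidence to the finer scale
        out = gated
        fused.append(out)
    finest_hw = features[-1].shape[-2:]
    aligned = []
    for f in fused:
        while f.shape[-2:] != finest_hw:
            f = upsample2x(f)
        aligned.append(f)
    return np.concatenate(aligned, axis=0)
```

A final 1x1 prediction layer over the concatenated channels (omitted here) would then produce the segmentation mask; with three scales of 4-channel features at 8x8, 16x16, and 32x32, the fused output has shape (12, 32, 32).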

CVPR 2019
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semi-Supervised Video Object Segmentation | DAVIS 2016 | Spatiotemporal CNN | Jaccard (Mean) | 83.8 | #54 |
| | | | F-measure (Mean) | 83.8 | #54 |
| | | | J&F | 83.8 | #53 |
| Semi-Supervised Video Object Segmentation | DAVIS 2017 (val) | Spatiotemporal CNN | Jaccard (Mean) | 58.7 | #66 |
| | | | F-measure (Mean) | 64.6 | #68 |
| | | | J&F | 61.65 | #69 |
| Semi-Supervised Video Object Segmentation | DAVIS (no YouTube-VOS training) | STCNN | FPS | 0.26 | #23 |
| | | | D16 val (G) | 83.8 | #11 |
| | | | D16 val (J) | 83.8 | #11 |
| | | | D16 val (F) | 83.8 | #11 |
| | | | D17 val (G) | 61.7 | #25 |
| | | | D17 val (J) | 58.7 | #25 |
| | | | D17 val (F) | 64.6 | #25 |
| Semi-Supervised Video Object Segmentation | YouTube | Spatiotemporal CNN | mIoU | 0.796 | #2 |
