Video Propagation Networks

We propose a technique that propagates information forward through video data. The method is conceptually simple and can be applied to tasks that require the propagation of structured information, such as semantic labels, based on video content. We call the resulting model a 'Video Propagation Network': it processes video frames in an adaptive manner and is applied online, propagating information forward without access to future frames. In particular, we combine two components: a temporal bilateral network for dense, video-adaptive filtering, followed by a spatial network that refines features and adds flexibility. We present experiments on video object segmentation and semantic video segmentation and show improved performance compared to the best previous task-specific methods, while having favorable runtime. Additionally, we demonstrate our approach on an example regression task: color propagation in a grayscale video.
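The core idea of temporal bilateral propagation can be illustrated with a toy sketch. The snippet below is not the paper's learned network; it is a minimal, hand-crafted stand-in in which each pixel of the current frame takes a weighted vote over labels in a small window of the previous frame, with weights given by color and spatial proximity (a crude Gaussian bilateral kernel). The function name and all parameters (`sigma_color`, `sigma_space`, `radius`) are illustrative assumptions, not names from the paper.

```python
import numpy as np

def propagate_labels(prev_frame, prev_labels, cur_frame,
                     sigma_color=0.1, sigma_space=3.0, radius=2):
    """Toy bilateral label propagation (illustrative, not the paper's model).

    prev_frame, cur_frame: float arrays of shape (H, W, 3) in [0, 1].
    prev_labels: integer label map of shape (H, W) for the previous frame.
    Returns an integer label map for the current frame.
    """
    h, w = cur_frame.shape[:2]
    n_labels = int(prev_labels.max()) + 1
    out = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            votes = np.zeros(n_labels)
            # Accumulate label votes from a window in the previous frame,
            # weighted by color similarity and spatial distance.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    py, px = y + dy, x + dx
                    if 0 <= py < h and 0 <= px < w:
                        color_d = np.sum((cur_frame[y, x] - prev_frame[py, px]) ** 2)
                        space_d = dy * dy + dx * dx
                        wgt = np.exp(-color_d / (2 * sigma_color ** 2)
                                     - space_d / (2 * sigma_space ** 2))
                        votes[prev_labels[py, px]] += wgt
            out[y, x] = votes.argmax()
    return out
```

In the paper, this fixed kernel is replaced by a learned bilateral network operating in a video-adaptive feature space, and the result is further refined by a spatial network; the sketch only conveys the propagation step at a conceptual level.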

CVPR 2017
Task: Semi-Supervised Video Object Segmentation
Dataset: DAVIS 2016
Model: VPN

Metric              Value   Global Rank
Jaccard (Mean)      70.2    #51
Jaccard (Recall)    82.3    #30
Jaccard (Decay)     12.4    #7
F-measure (Mean)    65.6    #50
F-measure (Recall)  69.0    #31
F-measure (Decay)   14.4    #7
J&F                 67.9    #50

