Learning Referring Video Object Segmentation from Weak Annotation

4 Aug 2023 · Wangbo Zhao, Kepan Nan, Songyang Zhang, Kai Chen, Dahua Lin, Yang You

Referring video object segmentation (RVOS) aims to segment the target object in all frames of a video given a sentence describing that object. Although existing RVOS methods achieve strong performance, they rely on densely annotated datasets, which are expensive and time-consuming to obtain. In this paper, we propose a new annotation scheme that reduces annotation effort by 8 times while still providing sufficient supervision for RVOS. Our scheme requires only a mask for the frame in which the object first appears and bounding boxes for the remaining frames. Based on this scheme, we develop a novel RVOS method that exploits weak annotations effectively. Specifically, we first build a simple but effective baseline model, SimRVOS, for RVOS with weak annotation. We then design a cross-frame segmentation module, which uses language-guided dynamic filters generated from one frame to segment the target object in other frames, thoroughly leveraging the single valuable mask annotation and the bounding boxes. Finally, we develop a bi-level contrastive learning method to enhance the model's pixel-level discriminative representation under weak annotation. Extensive experiments show that our method achieves performance comparable or even superior to fully supervised methods, without requiring dense mask annotations.
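
The abstract describes the cross-frame segmentation idea only at a high level, so the following is a minimal PyTorch sketch of one plausible reading: dynamic filter weights are generated from the language embedding and the features of the mask-annotated frame, then applied as a 1x1 convolution to the features of another frame to predict its mask. The module name, tensor shapes, and the specific 1x1 dynamic-filter form are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of cross-frame segmentation with language-guided
# dynamic filters (names and shapes are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossFrameDynamicSegHead(nn.Module):
    def __init__(self, vis_dim: int = 256, lang_dim: int = 256):
        super().__init__()
        # Fuse pooled visual features of the mask-annotated frame with the
        # sentence embedding, then predict per-sample 1x1 filter weights.
        self.filter_gen = nn.Sequential(
            nn.Linear(vis_dim + lang_dim, vis_dim),
            nn.ReLU(inplace=True),
            nn.Linear(vis_dim, vis_dim + 1),  # kernel (vis_dim) + bias (1)
        )

    def forward(self, ref_feat, tgt_feat, lang_emb):
        """
        ref_feat: (B, C, H, W) features of the frame with a mask annotation
        tgt_feat: (B, C, H, W) features of another (box-annotated) frame
        lang_emb: (B, L) sentence-level language embedding
        returns:  (B, 1, H, W) mask logits for the target frame
        """
        b, c, h, w = tgt_feat.shape
        pooled = ref_feat.flatten(2).mean(dim=2)                 # (B, C)
        params = self.filter_gen(torch.cat([pooled, lang_emb], dim=1))
        kernel, bias = params[:, :c], params[:, c:]              # (B, C), (B, 1)
        # Apply each sample's dynamic 1x1 filter via a grouped convolution.
        logits = F.conv2d(
            tgt_feat.reshape(1, b * c, h, w),
            kernel.reshape(b, c, 1, 1),
            bias=bias.reshape(b),
            groups=b,
        )
        return logits.reshape(b, 1, h, w)
```

Under this reading, the single mask-annotated frame can supervise predictions on the box-annotated frames, which is consistent with the paper's goal of extracting dense supervision from weak annotation; the exact loss terms and the bi-level contrastive learning component are not sketched here.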
