Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation

5 Mar 2025 · Suhwan Cho, Seunghoon Lee, Minhyeok Lee, Jungho Lee, Sangyoun Lee

Referring video object segmentation aims to segment and track a target object in a video using a natural language prompt. Existing methods typically fuse visual and textual features in a highly entangled manner, processing multi-modal information together to generate per-frame masks. However, this approach often struggles with ambiguous target identification, particularly in scenes with multiple similar objects, and fails to ensure consistent mask propagation across frames. To address these limitations, we introduce FindTrack, a novel decoupled framework that separates target identification from mask propagation. FindTrack first adaptively selects a key frame by balancing segmentation confidence and vision-text alignment, establishing a robust reference for the target object. This reference is then utilized by a dedicated propagation module to track and segment the object across the entire video. By decoupling these processes, FindTrack effectively reduces ambiguities in target association and enhances segmentation consistency. We demonstrate that FindTrack outperforms existing methods on public benchmarks.
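The key-frame selection step described above can be sketched as a simple per-frame scoring rule. This is an illustrative assumption only: the function name, the linear weighting, and the parameter `alpha` are hypothetical, as the abstract does not specify how confidence and alignment are combined.

```python
def select_key_frame(seg_confidence, text_alignment, alpha=0.5):
    """Pick a key frame by balancing per-frame segmentation confidence
    with vision-text alignment scores.

    NOTE: the linear combination and the weight `alpha` are assumptions
    for illustration; FindTrack's actual scoring rule may differ.
    """
    # Combined score for each frame.
    scores = [alpha * c + (1.0 - alpha) * a
              for c, a in zip(seg_confidence, text_alignment)]
    # Return the index of the highest-scoring frame.
    return max(range(len(scores)), key=scores.__getitem__)

# Example: frame 2 is confidently segmented and well aligned with the text.
conf = [0.81, 0.62, 0.93, 0.70]
align = [0.55, 0.90, 0.88, 0.40]
key = select_key_frame(conf, align)  # -> 2
```

The selected frame's mask would then serve as the reference that the propagation module tracks through the rest of the video.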

Benchmark results (task: Referring Video Object Segmentation; model: FindTrack; ranks are global leaderboard positions):

Dataset             J&F          J            F
MeViS               48.2 (#6)    45.6 (#4)    50.7 (#7)
Ref-DAVIS17         74.2 (#1)    69.9 (#1)    78.5 (#1)
Refer-YouTube-VOS   70.3 (#2)    68.6 (#1)    72.0 (#2)
