The Devil is in Temporal Token: High Quality Video Reasoning Segmentation

Existing methods for video reasoning segmentation rely heavily on a single special token to represent the object in the keyframe or the entire video, inadequately capturing spatial complexity and inter-frame motion. To overcome these challenges, we propose VRS-HQ, an end-to-end video reasoning segmentation approach that leverages Multimodal Large Language Models (MLLMs) to inject rich spatiotemporal features into hierarchical tokens. Our key innovations are Temporal Dynamic Aggregation (TDA) and Token-driven Keyframe Selection (TKS). Specifically, we design frame-level <SEG> and temporal-level <TAK> tokens that exploit the MLLM's autoregressive learning to capture both local and global information. We then apply a similarity-based weighted fusion and frame-selection strategy, and use SAM2 to perform keyframe segmentation and mask propagation. To improve keyframe localization accuracy, TKS filters keyframes based on SAM2's occlusion scores during inference. VRS-HQ achieves state-of-the-art performance on ReVOS, surpassing VISA by 5.9%/12.5%/9.1% in J&F across the three subsets. These results highlight the strong temporal reasoning and segmentation capabilities of our method. Code and model weights will be released at VRS-HQ.
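The fusion and selection step described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): frame-level <SEG> tokens are fused into a single prompt token using similarity weights against the temporal <TAK> token, and the keyframe is chosen as the most similar frame among those SAM2 does not flag as occluded. The function name, the softmax weighting, and the occlusion threshold are all assumptions for illustration.

```python
import numpy as np

def fuse_and_select_keyframe(seg_tokens, tak_token, occlusion_scores, occ_thresh=0.5):
    """Hypothetical sketch of similarity-weighted token fusion + keyframe selection.

    seg_tokens:       (T, D) frame-level <SEG> token embeddings
    tak_token:        (D,)   temporal-level <TAK> token embedding
    occlusion_scores: (T,)   per-frame occlusion scores (higher = more occluded)
    """
    # Cosine similarity between each frame-level token and the temporal token.
    seg_norm = seg_tokens / np.linalg.norm(seg_tokens, axis=1, keepdims=True)
    tak_norm = tak_token / np.linalg.norm(tak_token)
    sims = seg_norm @ tak_norm                      # (T,)

    # Similarity-based weighted fusion (softmax weights over frames; assumed form).
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    fused = weights @ seg_tokens                    # (D,) fused prompt token

    # Keyframe selection: most similar frame among non-occluded candidates.
    candidates = np.where(occlusion_scores < occ_thresh)[0]
    if candidates.size == 0:
        candidates = np.arange(len(sims))           # fall back to all frames
    keyframe = int(candidates[np.argmax(sims[candidates])])
    return fused, keyframe
```

In this sketch, filtering by occlusion score before taking the argmax is what prevents a well-matched but occluded frame from being chosen as the keyframe that prompts SAM2.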

PDF Abstract (CVPR 2025)
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Referring Video Object Segmentation | MeViS | VRS-HQ (Chat-UniVi-13B) | J&F | 50.9 | #3 |
| Referring Video Object Segmentation | MeViS | VRS-HQ (Chat-UniVi-13B) | J | 48 | #3 |
| Referring Video Object Segmentation | MeViS | VRS-HQ (Chat-UniVi-13B) | F | 53.7 | #4 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | VRS-HQ (Chat-UniVi-13B) | J&F | 71 | #2 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | VRS-HQ (Chat-UniVi-13B) | J | 69 | #2 |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | VRS-HQ (Chat-UniVi-13B) | F | 73.1 | #2 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-13B) | J | 57.6 | #1 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-13B) | F | 62.5 | #1 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-13B) | J&F | 60 | #1 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-13B) | R | 18.9 | #2 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-7B) | J | 56.6 | #2 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-7B) | F | 61.6 | #2 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-7B) | J&F | 59.1 | #2 |
| Referring Video Object Segmentation | ReVOS | VRS-HQ (Chat-UniVi-7B) | R | 19.7 | #1 |
