Referring Segmentation in Images and Videos with Cross-Modal Self-Attention Network

9 Feb 2021 · Linwei Ye, Mrigank Rochan, Zhi Liu, Xiaoqin Zhang, Yang Wang

We consider the problem of referring segmentation in images and videos with natural language. Given an input image (or video) and a referring expression, the goal is to segment the entity referred to by the expression in the image or video. In this paper, we propose a cross-modal self-attention (CMSA) module that exploits fine details of individual words and of the input image or video, effectively capturing long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and on important regions in the visual input. We further propose a gated multi-level fusion (GMLF) module to selectively integrate self-attentive cross-modal features corresponding to different levels of visual features. This module controls the flow of information across feature levels, where high-level and low-level semantic features relate to different attentive words. In addition, we introduce a cross-frame self-attention (CFSA) module that effectively integrates temporal information from consecutive frames, extending our method to referring segmentation in videos. Experiments on four referring image segmentation benchmarks and two actor-and-action video segmentation datasets consistently demonstrate that our approach outperforms existing state-of-the-art methods.
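As a rough illustration of the cross-modal self-attention idea described above, the sketch below builds a joint feature for every (pixel, word) pair from visual features, spatial coordinates, and word embeddings, then runs scaled dot-product self-attention over all such pairs. This is a minimal PyTorch sketch; the tensor shapes, projection sizes, and the averaging over the word dimension are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a cross-modal self-attention (CMSA) block.
# Dimensions (vis_dim, lang_dim, spatial_dim, embed_dim) are assumed for illustration.
import torch
import torch.nn as nn

class CrossModalSelfAttention(nn.Module):
    def __init__(self, vis_dim=512, lang_dim=300, spatial_dim=8, embed_dim=256):
        super().__init__()
        joint_dim = vis_dim + lang_dim + spatial_dim  # one feature per (pixel, word) pair
        self.query = nn.Linear(joint_dim, embed_dim)
        self.key = nn.Linear(joint_dim, embed_dim)
        self.value = nn.Linear(joint_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vis_dim)

    def forward(self, visual, words, coords):
        # visual: (B, H*W, vis_dim), words: (B, T, lang_dim), coords: (B, H*W, spatial_dim)
        B, N, _ = visual.shape
        T = words.size(1)
        # Pair every spatial location with every word embedding.
        vis = torch.cat([visual, coords], dim=-1)            # (B, N, vis_dim + spatial_dim)
        vis = vis.unsqueeze(2).expand(-1, -1, T, -1)         # (B, N, T, ...)
        lang = words.unsqueeze(1).expand(-1, N, -1, -1)      # (B, N, T, lang_dim)
        joint = torch.cat([vis, lang], dim=-1).reshape(B, N * T, -1)

        # Scaled dot-product self-attention over all (pixel, word) positions.
        q, k, v = self.query(joint), self.key(joint), self.value(joint)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        fused = attn @ v                                      # (B, N*T, joint_dim)
        # Collapse the word dimension (here by averaging) to get one feature per pixel.
        fused = fused.reshape(B, N, T, -1).mean(dim=2)
        return self.out(fused)                                # (B, N, vis_dim)
```

Note that attention over all H·W·T positions is quadratic in that product, which is why modules of this kind are typically applied to downsampled feature maps.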


Results from the Paper


Task: Referring Expression Segmentation   Model: CMSA+CFSA

Dataset: A2D Sentences

  Metric          Value    Global Rank
  Precision@0.5   0.487    #24
  Precision@0.6   0.431    #22
  Precision@0.7   0.358    #20
  Precision@0.8   0.231    #20
  Precision@0.9   0.052    #19
  IoU overall     0.618    #19
  IoU mean        0.432    #23

Dataset: J-HMDB

  Metric          Value    Global Rank
  Precision@0.5   0.764    #12
  Precision@0.6   0.625    #13
  Precision@0.7   0.389    #10
  Precision@0.8   0.09     #9
  Precision@0.9   0.001    #5
  IoU overall     0.628    #9
  IoU mean        0.581    #12
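For reference, the metrics in the tables follow the standard definitions used in referring segmentation benchmarks: Precision@K is the fraction of test samples whose predicted mask has IoU above K with the ground truth, overall IoU pools intersection and union counts over the whole test set, and mean IoU averages per-sample IoU. The NumPy sketch below (function and variable names are illustrative) computes these quantities from binary masks.

```python
# Sketch of the evaluation metrics, under their standard definitions in this literature.
import numpy as np

def evaluate(pred_masks, gt_masks, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    ious, total_inter, total_union = [], 0, 0
    for pred, gt in zip(pred_masks, gt_masks):   # binary H x W arrays
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 0.0)
        total_inter += inter
        total_union += union
    ious = np.array(ious)
    return {
        # Precision@K: fraction of samples with per-sample IoU above the threshold K.
        **{f"Precision@{t}": float((ious > t).mean()) for t in thresholds},
        # Overall IoU: dataset-level intersection over dataset-level union.
        "IoU overall": float(total_inter / total_union),
        # Mean IoU: average of per-sample IoU values.
        "IoU mean": float(ious.mean()),
    }
```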
