Differentiated Relevances Embedding for Group-based Referring Expression Comprehension

12 Mar 2022  ·  Fuhai Chen, Xuri Ge, Xiaoshuai Sun, Yue Gao, Jianzhuang Liu, Fufeng Chen, Wenjie Li

The key to referring expression comprehension (REC) lies in capturing the cross-modal visual-linguistic relevance. Existing works typically model this relevance within each image, where the anchor object/expression and its positive expression/object share the same attribute as the negative expression/object but differ in attribute value. These objects/expressions are used exclusively to learn an implicit representation of the attribute from a single pair of differing values, which limits the accuracy of the attribute representations, the expression/object representations, and their cross-modal relevances, since each anchor object/expression usually has multiple attributes and each attribute usually has multiple potential values. To this end, we investigate a novel REC problem named Group-based REC, where each object/expression is simultaneously employed to construct multiple triplets among semantically similar images. To handle the explosion of negatives and the differentiation of anchor-negative relevance scores, we propose a multi-group self-paced relevance learning scheme that adaptively assigns within-group object-expression pairs different priorities based on their cross-modal relevances. Since the average cross-modal relevance varies considerably across groups, we further design an across-group relevance constraint to balance the bias of the group priorities. Experiments on three standard REC benchmarks demonstrate the effectiveness and superiority of our method.
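To make the self-paced weighting idea concrete, the sketch below shows one common way such priorities can be realized: each within-group triplet incurs a hinge loss on its cross-modal relevance scores, and only triplets whose loss falls below a growing "age" threshold are admitted to the objective, so easier (higher-relevance) pairs are learned first. This is a minimal illustration under assumed inputs (L2-normalized embeddings, a margin, and an `age` parameter), not the authors' released implementation.

```python
import torch

def self_paced_triplet_loss(anchor, positives, negatives, age=1.0, margin=0.2):
    """Self-paced weighting over cross-modal triplets (illustrative sketch).

    anchor:    [d]    L2-normalized embedding of the anchor object/expression
    positives: [n, d] L2-normalized embeddings of positive counterparts
    negatives: [n, d] L2-normalized embeddings of negative counterparts
    age:       self-paced threshold; grown over training so harder
               (lower-relevance) pairs are admitted in later epochs
    """
    pos_rel = positives @ anchor                    # cross-modal relevance, shape [n]
    neg_rel = negatives @ anchor
    losses = torch.clamp(margin - pos_rel + neg_rel, min=0.0)  # triplet hinge losses

    # Self-paced priorities: triplets with loss below the age threshold get
    # weight 1 (high priority), the rest get weight 0 for now.
    weights = (losses < age).float().detach()
    return (weights * losses).sum() / weights.sum().clamp(min=1.0)
```

An across-group constraint in this spirit could, for instance, normalize each group's relevance scores by the group mean before thresholding, so that groups with uniformly high average relevance do not dominate the priorities; the exact form used in the paper may differ.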
