Revocable Deep Reinforcement Learning with Affinity Regularization for Outlier-Robust Graph Matching

16 Dec 2020 · Chang Liu, Runzhong Wang, Zetian Jiang, Junchi Yan, Lingxiao Huang, Pinyan Lu

Graph matching (GM) is a fundamental building block in many areas, including computer vision and pattern recognition. Despite recent impressive progress, existing deep GM methods often struggle to handle outliers in both graphs, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning (RL) based approach for weighted graph matching, whose sequential node-matching scheme naturally supports selective inlier matching in the presence of outliers. A revocable action scheme is devised to give the agent flexibility on this complex constrained matching task. Moreover, we propose a quadratic approximation technique to regularize the affinity matrix in the presence of outliers. As a result, the RL agent can terminate inlier matching in a timely manner once the objective score stops growing; otherwise, an additional hyperparameter, i.e., the number of common inliers, would be needed to avoid matching outliers. In this paper, we focus on learning the back-end solver for the most general form of GM, Lawler's QAP, whose input is the affinity matrix. Our approach can also boost other solvers that take the affinity matrix as input. Experimental results on both synthetic and real-world datasets showcase superior performance in both matching accuracy and robustness.
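For reference, Lawler's QAP named in the abstract is the standard formulation (the notation below is the conventional one, not copied from the paper): given an affinity matrix $\mathbf{K} \in \mathbb{R}^{n_1 n_2 \times n_1 n_2}$, whose diagonal encodes node-to-node affinities and whose off-diagonal entries encode edge-to-edge affinities, GM seeks a (partial) assignment matrix maximizing

$$
\max_{\mathbf{X}} \ \operatorname{vec}(\mathbf{X})^\top \mathbf{K}\, \operatorname{vec}(\mathbf{X})
\quad \text{s.t.} \quad \mathbf{X} \in \{0,1\}^{n_1 \times n_2},\ \mathbf{X}\mathbf{1} \le \mathbf{1},\ \mathbf{X}^\top \mathbf{1} \le \mathbf{1}.
$$

The inequality constraints (rather than equalities) allow nodes to remain unmatched, which is what makes selective inlier matching against outliers possible in the first place.

To make the sequential, revocable matching idea concrete, the sketch below replaces the learned RL policy with a greedy stand-in: at each step it takes whichever "match" or "revoke" action most increases the affinity score, and stops as soon as no action improves it. Everything here (function names, the row-major pair indexing `i * n2 + j`, the greedy action choice) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def qap_score(K, matches, n2):
    """Lawler QAP score vec(X)^T K vec(X) of a partial matching.
    matches: list of (i, j) pairs; pair (i, j) maps to index i * n2 + j."""
    idx = [i * n2 + j for i, j in matches]
    return K[np.ix_(idx, idx)].sum()

def greedy_revocable_matching(K, n1, n2, max_steps=1000):
    """Greedy stand-in for the RL agent with a revocable action space:
    each step either matches a new node pair or revokes an existing one,
    and the episode ends when the objective score stops growing."""
    matches, used1, used2 = [], set(), set()
    score = 0.0
    for _ in range(max_steps):
        best_gain, best_action = 0.0, None
        # "match" actions: add an as-yet-unmatched node pair
        for i in range(n1):
            if i in used1:
                continue
            for j in range(n2):
                if j in used2:
                    continue
                gain = qap_score(K, matches + [(i, j)], n2) - score
                if gain > best_gain:
                    best_gain, best_action = gain, ("match", (i, j))
        # "revoke" actions: undo a previously committed match
        for m in matches:
            rest = [p for p in matches if p != m]
            gain = qap_score(K, rest, n2) - score
            if gain > best_gain:
                best_gain, best_action = gain, ("revoke", m)
        if best_action is None:
            break  # no improving action: stop before matching outliers
        kind, (i, j) = best_action
        if kind == "match":
            matches.append((i, j)); used1.add(i); used2.add(j)
        else:
            matches.remove((i, j)); used1.discard(i); used2.discard(j)
        score += best_gain
    return matches, score
```

The early stop is where the affinity regularization matters: stopping only avoids outliers if matching an outlier pair yields a non-positive score gain, which is exactly what regularizing $\mathbf{K}$ is meant to enforce.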
