Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment

31 Jan 2024  ·  Yuanyi Wang, Haifeng Sun, Jiabo Wang, Jingyu Wang, Wei Tang, Qi Qi, Shaoling Sun, Jianxin Liao

In Multi-Modal Knowledge Graphs (MMKGs), Multi-Modal Entity Alignment (MMEA) is crucial for identifying identical entities across diverse modal attributes. However, semantic inconsistency, mainly caused by missing modal attributes, poses a significant challenge. Traditional approaches rely on attribute interpolation, but this often introduces modality noise that distorts the original semantics. Moreover, the lack of a universal theoretical framework limits progress toward semantic consistency. This study introduces DESAlign, a novel approach that addresses these issues through a theoretical framework based on Dirichlet energy to ensure semantic consistency. We find that semantic inconsistency causes models to overfit to modality noise, producing performance fluctuations, particularly when modalities are missing. DESAlign combats over-smoothing and interpolates absent semantics from the modalities that are present. Our approach combines a multi-modal knowledge graph learning strategy with a propagation technique that uses existing semantic features to compensate for missing ones, for which we provide explicit Euler solutions. Comprehensive evaluations across 60 benchmark splits, covering monolingual and bilingual scenarios, show that DESAlign surpasses existing methods and sets a new standard in performance. Further testing under high rates of missing modalities confirms its robustness, offering an effective solution to semantic inconsistency in real-world MMKGs.
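The propagation idea described in the abstract can be illustrated with a minimal sketch: missing modality features are imputed by explicit Euler steps on the gradient flow of the graph Dirichlet energy (dX/dt = -LX), while observed features are clamped to their original values. This is not the paper's implementation; the function names, the normalized Laplacian, the step size `alpha`, and the toy graph below are assumptions made purely for illustration.

```python
import numpy as np

def dirichlet_energy(X, L):
    """Graph Dirichlet energy E(X) = 1/2 * tr(X^T L X)."""
    return 0.5 * np.trace(X.T @ L @ X)

def propagate_missing(X, A, known_mask, alpha=0.5, steps=50):
    """Impute missing node features with explicit Euler steps on the
    Dirichlet-energy gradient flow dX/dt = -L X, clamping observed rows.
    (Illustrative sketch only; not the paper's exact formulation.)"""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt  # normalized Laplacian
    X_obs = X.copy()
    X = np.where(known_mask[:, None], X, 0.0)  # initialize missing rows at zero
    for _ in range(steps):
        X = X - alpha * (L @ X)                # explicit Euler step
        X[known_mask] = X_obs[known_mask]      # keep observed features fixed
    return X, L

# Toy example: 4 entities on a path graph; entity 2 is missing its feature.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.0, 0.0],   # missing modality feature
              [0.2, 0.9]])
mask = np.array([True, True, False, True])

X_filled, L = propagate_missing(X, A, mask)
print("imputed feature:", X_filled[2])
print("Dirichlet energy:", dirichlet_energy(X_filled, L))
```

Because the normalized Laplacian has eigenvalues in [0, 2], a step size of 0.5 keeps the explicit Euler iteration stable; the imputed row converges toward a weighted average of its neighbors, which is the low-Dirichlet-energy (smooth) completion the abstract alludes to.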
