We consider transferring and retaining learned knowledge across growing snapshots of a KG without having to relearn embeddings from scratch.
Towards a critical evaluation of embedding-based entity alignment methods, we construct a new dataset with heterogeneous relations and attributes based on event-centric KGs.
Therefore, in this work, we propose a scalable GNN-based entity alignment approach to reduce the structure and alignment loss from three perspectives.
To this end, we propose CoLE, a Co-distillation Learning method for KG Embedding that exploits the complementarity of graph structures and text information.
To avoid retraining an entire model on the whole KGs whenever new entities and triples arrive, we present a continual alignment method for this task.
We study dangling-aware entity alignment in knowledge graphs (KGs), which is an underexplored but important problem.
We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher.
Since KGs possess different sets of entities, there could be entities that cannot find alignment across them, leading to the problem of dangling entities.
Ranked #1 on Entity Alignment on DBP2.0 zh-en
In this paper, we define a typical paradigm abstracted from existing methods, and analyze how the representation discrepancy between two potentially aligned entities is implicitly bounded by a predefined margin in the scoring function for embedding learning.
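A minimal numpy sketch of such a margin-based objective (the function name and values are illustrative, not taken from the paper): the hinge loss is zero only when an aligned pair's distance is smaller than a negative pair's distance by at least the margin, which is how the margin implicitly bounds the representation discrepancy of aligned entities.

```python
import numpy as np

def margin_ranking_loss(pos_dist, neg_dist, margin=1.0):
    # Hinge loss: push aligned (positive) pairs closer than
    # non-aligned (negative) pairs by at least `margin`.
    # Zero loss requires pos_dist <= neg_dist - margin.
    return np.maximum(0.0, pos_dist - neg_dist + margin)
```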
Knowledge graph (KG) representation learning methods have achieved competitive performance in many KG-oriented tasks, among which the best ones are usually based on graph neural networks (GNNs), a powerful family of networks that learns the representation of an entity by aggregating the features of its neighbors and itself.
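The neighbor-aggregation idea can be sketched in a few lines of numpy; this is a generic mean-aggregation layer under assumed names (`gnn_layer`, `adj`, `weight`), not the specific architecture of any listed paper.

```python
import numpy as np

def gnn_layer(features, adj, weight):
    """One mean-aggregation GNN layer: each entity's new
    representation combines its own features with those of
    its neighbors (self-loop added via the identity matrix)."""
    n = adj.shape[0]
    adj_hat = adj + np.eye(n)                 # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    agg = (adj_hat / deg) @ features          # mean over neighborhood
    return np.maximum(0.0, agg @ weight)      # ReLU nonlinearity
```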
Capturing associations for knowledge graphs (KGs) through entity alignment, entity type inference and other related tasks benefits NLP applications with comprehensive knowledge representations.
Ranked #20 on Entity Alignment on DBP15k zh-en
We refer to such contextualized representations of a relation as edge embeddings and interpret them as translations between entity embeddings.
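The translation interpretation can be illustrated with a TransE-style plausibility score (a sketch under that assumption; the paper's contextualized edge embeddings are richer than a static relation vector): the edge embedding translates the head entity toward the tail, so a plausible triple has a small residual norm.

```python
import numpy as np

def translation_score(head, edge, tail):
    # TransE-style score: the edge embedding acts as a translation
    # from head to tail in embedding space, so a well-modeled
    # triple yields a small ||h + r - t||.
    return np.linalg.norm(head + edge - tail)
```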
Recent advances in KG embedding have spurred embedding-based entity alignment, which encodes entities in a continuous embedding space and measures entity similarity based on the learned embeddings.
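As a minimal sketch of this similarity-based matching (assuming pre-trained embeddings and a simple greedy nearest-neighbor rule, not any particular paper's inference procedure), each source entity can be aligned to its most cosine-similar target entity:

```python
import numpy as np

def align_by_similarity(src_emb, tgt_emb):
    # Row-normalize so the dot product equals cosine similarity,
    # then match each source entity to its most similar target.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T               # (n_src, n_tgt) similarity matrix
    return sim.argmax(axis=1)       # index of best target per source
```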
As the direct neighbors of counterpart entities are usually dissimilar due to the schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures.
Ranked #21 on Entity Alignment on DBP15k zh-en
Furthermore, we design some cross-KG inference methods to enhance the alignment between two KGs.
Moreover, triple-level learning is insufficient for the propagation of semantic information among entities, especially for the case of cross-KG embedding.
Our experimental results on real-world datasets show that this approach significantly outperforms the state-of-the-art embedding approaches for cross-lingual entity alignment and could be complemented with methods based on machine translation.