Search Results for author: Xin Mao

Found 16 papers, 12 papers with code

An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism

1 code implementation • ACL 2022 • Xin Mao, Meirong Ma, Hao Yuan, Jianchao Zhu, ZongYu Wang, Rui Xie, Wei Wu, Man Lan

Entity alignment (EA) aims to discover the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task, focusing on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI can effectively exploit the adjacency and inner-correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm delivers significant performance improvements even over the most advanced EA methods, while the extra required time is less than 3 seconds.

Entity Alignment • Graph Representation Learning
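The abstract describes enhancing EA decoding with adjacency and Gramian (inner-product) structure. As a rough illustration of that general idea only, and not the paper's third-order tensor formulation, the sketch below blends a raw embedding-similarity matrix with a structure-propagated one before greedy decoding; all function and variable names are hypothetical.

```python
# Minimal, illustrative sketch of structure-aware similarity refinement for EA
# decoding, loosely inspired by the DATTI abstract. The real DATTI combines
# adjacency- and Gramian-tensor isomorphism equations; the names below
# (refine_similarity, A1, A2, emb1, emb2) are assumptions, not the paper's code.
import numpy as np

def refine_similarity(emb1, emb2, A1, A2, alpha=0.5):
    """Blend raw embedding similarity with a structure-propagated similarity.

    emb1, emb2 : (n1, d), (n2, d) entity embeddings from any EA encoder.
    A1, A2     : (n1, n1), (n2, n2) row-normalized adjacency matrices.
    """
    S = emb1 @ emb2.T                # inner-product ("Gramian") similarity
    S_struct = A1 @ S @ A2.T         # propagate similarity along both graphs
    return alpha * S + (1 - alpha) * S_struct

def decode(S):
    """Greedy decoding: each source entity picks its highest-scoring target."""
    return S.argmax(axis=1)

# toy usage
rng = np.random.default_rng(0)
emb1, emb2 = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
A1 = A2 = np.eye(5)
print(decode(refine_similarity(emb1, emb2, A1, A2)))
```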

A Survey on Temporal Knowledge Graph: Representation Learning and Applications

no code implementations • 2 Mar 2024 • Li Cai, Xin Mao, Yuhao Zhou, Zhaoguang Long, Changxu Wu, Man Lan

Knowledge graph representation learning aims to learn low-dimensional vector embeddings for entities and relations in a knowledge graph.

Graph Representation Learning • Knowledge Graphs

Don't Forget Your Reward Values: Language Model Alignment via Value-based Calibration

1 code implementation • 25 Feb 2024 • Xin Mao, Feng-Lin Li, Huimin Xu, Wei Zhang, Anh Tuan Luu

While Reinforcement Learning from Human Feedback (RLHF) significantly enhances the generation quality of Large Language Models (LLMs), recent studies have raised concerns regarding the complexity and instability of the Proximal Policy Optimization (PPO) algorithm and have proposed a series of order-based calibration methods as viable alternatives.

Language Modelling

Universal Multi-modal Entity Alignment via Iteratively Fusing Modality Similarity Paths

1 code implementation • 9 Oct 2023 • Bolin Zhu, Xiaoze Liu, Xin Mao, Zhuo Chen, Lingbing Guo, Tao Gui, Qi Zhang

The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG.

Knowledge Graphs • Multi-modal Entity Alignment

An Effective and Efficient Time-aware Entity Alignment Framework via Two-aspect Three-view Label Propagation

1 code implementation • 12 Jul 2023 • Li Cai, Xin Mao, Youshao Xiao, Changxu Wu, Man Lan

Entity alignment (EA) aims to find the equivalent entity pairs between different knowledge graphs (KGs), which is crucial to promote knowledge fusion.

Entity Alignment • Knowledge Graphs

LightEA: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation

2 code implementations • 19 Oct 2022 • Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Entity Alignment (EA) aims to find equivalent entity pairs between KGs, which is the core step of bridging and integrating multi-source KGs.

Entity Alignment
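LightEA's title names three-view label propagation. As a hedged illustration of the basic ingredient only (a single adjacency view, not the paper's three-view construction), the sketch below propagates one-hot labels of seed alignments over a row-normalized adjacency matrix; the helper name propagate_labels is hypothetical.

```python
# Illustrative single-view label propagation: seed entities carry one-hot
# labels, which are spread along a row-normalized adjacency matrix. This is
# an assumption-laden simplification, not LightEA's released implementation.
import numpy as np

def propagate_labels(A, labels, steps=2):
    """Iteratively propagate label vectors along a row-normalized adjacency A."""
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    for _ in range(steps):
        labels = A @ labels
    return labels

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
seed = np.eye(3)          # each seed entity carries its own one-hot label
print(propagate_labels(A, seed))
```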

A Simple Temporal Information Matching Mechanism for Entity Alignment Between Temporal Knowledge Graphs

1 code implementation • COLING 2022 • Li Cai, Xin Mao, Meirong Ma, Hao Yuan, Jianchao Zhu, Man Lan

However, we believe that it is not necessary to learn the embeddings of temporal information in KGs since most TKGs have uniform temporal representations.

Entity Alignment • Entity Embeddings +1
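The abstract argues that temporal information need not be embedded because most TKGs use uniform temporal representations, which suggests matching timestamps directly. The sketch below uses a simple Jaccard-style overlap of entity time sets as an assumed illustration of that intuition, not the paper's exact matching mechanism.

```python
# Illustrative temporal matching without learned time embeddings: score a pair
# of entities by the overlap of the timestamp sets attached to their facts.
# The Jaccard score here is an assumption for illustration only.
def temporal_similarity(times1: set, times2: set) -> float:
    """Jaccard overlap between the timestamp sets of two entities."""
    if not times1 and not times2:
        return 0.0
    return len(times1 & times2) / len(times1 | times2)

# toy usage: entities sharing more timestamps score higher
print(temporal_similarity({"2010", "2011", "2015"}, {"2011", "2015", "2020"}))  # 0.5
```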

From Alignment to Assignment: Frustratingly Simple Unsupervised Entity Alignment

1 code implementation • EMNLP 2021 • Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Cross-lingual entity alignment (EA) aims to find the equivalent entities between cross-lingual KGs, which is a crucial step for integrating KGs.

Ranked #4 on Entity Alignment on dbp15k fr-en (using extra training data)

Entity Alignment
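The title frames unsupervised EA as an assignment problem. As a minimal sketch of that framing under stated assumptions (not the paper's full procedure), one can build a cross-KG similarity matrix from any pretrained embeddings and solve a one-to-one assignment over it; the function name align_as_assignment is hypothetical.

```python
# Illustrative "alignment as assignment": maximize total similarity under a
# one-to-one matching constraint using the Hungarian algorithm from SciPy.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_as_assignment(emb1, emb2):
    """Return (rows, cols): a one-to-one matching maximizing total similarity."""
    S = emb1 @ emb2.T                       # cross-KG similarity matrix
    return linear_sum_assignment(S, maximize=True)

rng = np.random.default_rng(0)
rows, cols = align_as_assignment(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(list(zip(rows.tolist(), cols.tolist())))
```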

Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness

1 code implementation • 11 Aug 2021 • Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Entity alignment (EA) aims to find the equivalent entities in different KGs, which is a crucial step in integrating multiple KGs.

Ranked #6 on Entity Alignment on dbp15k ja-en (using extra training data)

Entity Alignment • Graph Sampling

Boosting the Speed of Entity Alignment 10×: Dual Attention Matching Network with Normalized Hard Sample Mining

1 code implementation • 29 Mar 2021 • Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Seeking the equivalent entities among multi-source Knowledge Graphs (KGs) is the pivotal step in KG integration, also known as entity alignment (EA).

Entity Alignment Knowledge Graphs

Relational Reflection Entity Alignment

2 code implementations • 18 Aug 2020 • Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, Man Lan

Entity alignment aims to identify equivalent entity pairs from different Knowledge Graphs (KGs), which is essential in integrating multi-source KGs.

Entity Alignment • Knowledge Graphs +1

MRAEA: An Efficient and Robust Entity Alignment Approach for Cross-lingual Knowledge Graph

1 code implementation • The International Conference on Web Search and Data Mining (WSDM) 2020 • Xin Mao, Wenting Wang, Huimin Xu, Man Lan, Yuanbin Wu

To tackle these challenges, we propose a novel Meta Relation Aware Entity Alignment (MRAEA) to directly model cross-lingual entity embeddings by attending over the node's incoming and outgoing neighbors and its connected relations' meta semantics.

Entity Alignment • Entity Embeddings +2
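The abstract describes attending over a node's incoming and outgoing neighbors together with the meta semantics of the connecting relations. The sketch below is a minimal numpy illustration of relation-aware neighbor attention in that spirit; the scoring function and all names are assumptions, not the published MRAEA model.

```python
# Illustrative relation-aware neighbor attention: an entity representation is
# updated by attending over its in-/out-neighbors, with each edge's score also
# conditioned on the relation embedding. Names and scoring are assumed.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_aware_attention(h_center, neighbor_embs, relation_embs, w):
    """Aggregate neighbors weighted by attention over [center; neighbor; relation].

    h_center      : (d,)   embedding of the central entity
    neighbor_embs : (k, d) embeddings of its in- and out-neighbors
    relation_embs : (k, d) embeddings of the relations on those edges
    w             : (3d,)  attention parameter vector
    """
    feats = np.concatenate(
        [np.tile(h_center, (len(neighbor_embs), 1)), neighbor_embs, relation_embs],
        axis=1,
    )                                   # (k, 3d) edge features
    scores = softmax(feats @ w)         # (k,) attention weights
    return scores @ neighbor_embs       # attention-weighted neighbor aggregation

rng = np.random.default_rng(0)
d, k = 8, 3
out = relation_aware_attention(
    rng.normal(size=d), rng.normal(size=(k, d)), rng.normal(size=(k, d)),
    rng.normal(size=3 * d),
)
print(out.shape)  # (8,)
```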

Is Discriminator a Good Feature Extractor?

no code implementations • 2 Dec 2019 • Xin Mao, Zhaoyu Su, Pin Siang Tan, Jun Kang Chow, Yu-Hsing Wang

From this perspective, and combined with further analyses, we found that to avoid mode collapse, the discriminator is not guided to extract distinct features for real samples, but divergence without noise is indeed allowed and occupies a large proportion of the feature space.

Transfer Learning
