Adapting Neural Link Predictors for Data-Efficient Complex Query Answering

NeurIPS 2023 · Erik Arakelyan, Pasquale Minervini, Daniel Daza, Michael Cochez, Isabelle Augenstein

Answering complex queries on incomplete knowledge graphs is a challenging task: a model must answer complex logical queries in the presence of missing knowledge. Prior work has addressed this problem with architectures trained end-to-end for the complex query answering task, which have a hard-to-interpret reasoning process and require data- and resource-intensive training. Other lines of research have proposed re-using simple neural link predictors to answer complex queries, reducing the amount of training data by orders of magnitude while providing interpretable answers. However, the neural link predictor used in such approaches is not explicitly optimised for the complex query answering task, which implies that its scores are not calibrated to interact with one another. We propose to address these problems via CQD$^{\mathcal{A}}$, a parameter-efficient score \emph{adaptation} model optimised to re-calibrate neural link prediction scores for the complex query answering task. While the neural link predictor is frozen, the adaptation component -- which only increases the number of model parameters by $0.03\%$ -- is trained on the downstream complex query answering task. Furthermore, the calibration component enables us to support reasoning over queries that include atomic negations, which was previously impossible with link predictors. In our experiments, CQD$^{\mathcal{A}}$ produces significantly more accurate results than current state-of-the-art methods, improving the Mean Reciprocal Rank averaged across all datasets and query types from $34.4$ to $35.1$ while using $\leq 30\%$ of the available training query types. We further show that CQD$^{\mathcal{A}}$ is data-efficient, achieving competitive results with only $1\%$ of the training complex queries, and robust in out-of-domain evaluations.
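
As a rough illustration of the idea described above (not the authors' implementation): a frozen link predictor produces atom scores, a small trainable adapter re-calibrates them, and the calibrated scores are composed with fuzzy-logic connectives, which also makes negated atoms straightforward. The toy link predictor, the affine-plus-sigmoid adapter, and the product t-norm in the sketch below are illustrative assumptions; the paper's exact parameterisation and optimisation differ.

```python
import torch
import torch.nn as nn

class ToyLinkPredictor(nn.Module):
    """Stand-in for a pretrained neural link predictor (e.g. ComplEx); kept frozen."""
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        for p in self.parameters():
            p.requires_grad_(False)  # the link predictor stays frozen

    def forward(self, heads, relation, tails):
        # Raw (uncalibrated) scores for every (head, relation, tail) combination.
        h = self.ent(heads)              # [H, d]
        r = self.rel(relation)           # [d]
        t = self.ent(tails)              # [T, d]
        return (h * r) @ t.T             # [H, T]

class ScoreAdapter(nn.Module):
    """Tiny trainable calibration head: affine transform + sigmoid over raw scores."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))   # scale
        self.beta = nn.Parameter(torch.zeros(1))   # shift

    def forward(self, raw_scores):
        # Map unbounded link-prediction scores to truth values in (0, 1).
        return torch.sigmoid(self.alpha * raw_scores + self.beta)

# Fuzzy connectives used to compose calibrated atom scores.
def conjunction(a, b): return a * b           # product t-norm
def disjunction(a, b): return a + b - a * b   # product t-conorm
def negation(a):       return 1.0 - a         # enables negated atoms

n_entities, n_relations = 100, 10
predictor = ToyLinkPredictor(n_entities, n_relations)
adapter = ScoreAdapter()

def score_2p_query(anchor, r1, r2):
    """Score every entity X for the 2p query  ?X : r1(anchor, V) AND r2(V, X)."""
    all_entities = torch.arange(n_entities)
    s1 = adapter(predictor(anchor.view(1), r1, all_entities))   # [1, |E|]
    s2 = adapter(predictor(all_entities, r2, all_entities))     # [|E|, |E|]
    # Optimise out the intermediate variable V: max over V of the conjunction.
    return conjunction(s1.T, s2).max(dim=0).values               # [|E|]

scores = score_2p_query(torch.tensor(3), torch.tensor(1), torch.tensor(2))
trainable = sum(p.numel() for p in adapter.parameters())
print(scores.shape, trainable)  # torch.Size([100]) 2 -- only the adapter is trained
```

The point the sketch tries to capture is that only the adapter's handful of parameters receive gradients from complex-query training while the link predictor stays frozen, which is what keeps the approach parameter- and data-efficient.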


Results from the Paper


Task: Complex Query Answering · Model: CQDA · Metric: MRR per query type (global leaderboard rank in parentheses)

Query type   FB15k        FB15k-237    NELL-995
1p           0.892 (#2)   0.467 (#2)   0.604 (#2)
2p           0.645 (#4)   0.136 (#3)   0.229 (#2)
3p           0.579 (#3)   0.114 (#3)   0.167 (#2)
2i           0.761 (#4)   0.345 (#3)   0.434 (#2)
3i           0.794 (#4)   0.483 (#4)   0.526 (#1)
pi           0.701 (#2)   0.274 (#3)   0.321 (#1)
ip           0.706 (#3)   0.209 (#2)   0.264 (#2)
2u           0.684 (#4)   0.176 (#2)   0.200 (#2)
up           0.579 (#3)   0.114 (#3)   0.170 (#2)
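
For context, the MRR values in the table are mean reciprocal ranks of the gold answers among all candidate entities. A simplified sketch of the computation (ignoring the standard filtered-ranking protocol and tie handling used by the benchmark) might look like this:

```python
import torch

def mean_reciprocal_rank(scores, gold):
    """scores: [n_queries, n_entities]; gold: [n_queries] indices of gold answers."""
    gold_scores = scores.gather(1, gold.view(-1, 1))   # score of each gold answer, [n, 1]
    ranks = 1 + (scores > gold_scores).sum(dim=1)       # rank of each gold answer
    return (1.0 / ranks.float()).mean().item()

scores = torch.tensor([[0.1, 0.9, 0.3],
                       [0.8, 0.2, 0.5]])
gold = torch.tensor([1, 2])   # gold answers rank 1st and 2nd -> MRR = (1 + 1/2) / 2
print(mean_reciprocal_rank(scores, gold))  # 0.75
```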
