Faithful Knowledge Graph Explanations for Commonsense Reasoning

7 Oct 2023  ·  Weihe Zhai, Arkaitz Zubiaga, Bingquan Liu, Chengjie Sun, Yalong Zhao

While fusing language models and knowledge graphs has become common in commonsense question answering research, enabling faithful chain-of-thought explanations in these models remains an open problem. Our analysis reveals that a major weakness of current KG-based explanation methods is that they overlook the faithfulness of path decoding during evaluation: the output distribution of the graph encoder often diverges from the original model's predictions, so decoded paths may not reflect what the model actually predicts. To address this gap, we make two main contributions: (1) we propose and validate Text-GNN Fidelity, a measure for assessing how reliably the graph representation reflects the target model in this setting; and (2) we introduce TeGDA (Text-Graph Distribution-aware Alignment), a novel algorithm that aligns the graph encoder with the target model to improve the faithfulness of subsequent explanations and that can be easily integrated into existing approaches. Our experiments and analysis show its potential to produce more faithful systems. More broadly, our work highlights the neglected problem of distributional misalignment in LM-KG reasoning models, which has been a latent source of spurious explanations.
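
The paper itself provides no code here, so the following is a minimal PyTorch sketch of the two ideas the abstract describes: a fidelity score that checks how often the graph encoder's top answer agrees with the fused LM-KG model's, and a distribution-alignment loss (here a temperature-scaled KL divergence) that pulls the graph encoder's answer distribution toward the target model's. The function names, the agreement-based formulation of fidelity, and the KL objective are illustrative assumptions, not the paper's actual definitions of Text-GNN Fidelity or TeGDA.

```python
import torch
import torch.nn.functional as F

def text_gnn_fidelity(lm_logits: torch.Tensor, gnn_logits: torch.Tensor) -> float:
    """Hypothetical fidelity score: fraction of questions where the graph
    encoder's top-scoring answer matches the fused LM-KG model's answer."""
    lm_pred = lm_logits.argmax(dim=-1)
    gnn_pred = gnn_logits.argmax(dim=-1)
    return (lm_pred == gnn_pred).float().mean().item()

def distribution_alignment_loss(lm_logits: torch.Tensor,
                                gnn_logits: torch.Tensor,
                                temperature: float = 1.0) -> torch.Tensor:
    """Illustrative alignment objective: KL divergence pushing the graph
    encoder's answer distribution toward the (frozen) target model's."""
    target = F.softmax(lm_logits.detach() / temperature, dim=-1)
    log_pred = F.log_softmax(gnn_logits / temperature, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean") * temperature ** 2

# Toy usage: 4 questions, 5 answer choices each.
lm_logits = torch.randn(4, 5)
gnn_logits = torch.randn(4, 5, requires_grad=True)
print("fidelity:", text_gnn_fidelity(lm_logits, gnn_logits))
loss = distribution_alignment_loss(lm_logits, gnn_logits)
loss.backward()
```

Detaching the LM logits keeps the target model fixed during alignment, so only the graph encoder is updated, which matches the abstract's framing of aligning the encoder to an existing model rather than retraining the whole system.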
