Fusing Context Into Knowledge Graph for Commonsense Question Answering

Commonsense question answering (QA) requires a model to grasp commonsense and factual knowledge to answer questions about world events. Many prior methods couple language modeling with knowledge graphs (KGs). However, although a KG contains rich structural information, it lacks the context needed for a precise understanding of the concepts. This creates a gap when fusing knowledge graphs into language modeling, especially when there is insufficient labeled data. Thus, we propose to employ external entity descriptions to provide contextual information for knowledge understanding. We retrieve descriptions of related concepts from Wiktionary and feed them as additional input to pre-trained language models. The resulting model achieves a state-of-the-art result on the CommonsenseQA dataset and the best result among non-generative models on OpenBookQA.

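As a concrete illustration of the pipeline described in the abstract, the sketch below stands in for the Wiktionary retrieval step with a small in-memory gloss table and concatenates the question, an answer choice, and their descriptions into the input of an off-the-shelf multiple-choice model from Hugging Face Transformers. The model name, gloss table, and input format are illustrative assumptions, not the authors' released implementation.

```python
# Minimal illustrative sketch (assumed names and input format, not the
# authors' released code): look up a description for a question concept
# and each answer choice, then feed the concatenation to a pre-trained
# multiple-choice model.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Stand-in for the Wiktionary retrieval step described in the abstract;
# a real system would fetch these glosses from Wiktionary.
GLOSS = {
    "revolving door": "a door with multiple wings that rotate around a central axis",
    "bank": "an institution where one can deposit and borrow money",
}

def build_input(tokenizer, question, concept, choice):
    """Concatenate question, choice, and their retrieved descriptions."""
    parts = [question, choice, GLOSS.get(concept, ""), GLOSS.get(choice, "")]
    return f" {tokenizer.sep_token} ".join(p for p in parts if p)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMultipleChoice.from_pretrained("roberta-base")  # choice head untrained here

question = ("A revolving door is convenient for two direction travel, "
            "but it also serves as a security measure at a what?")
concept = "revolving door"
choices = ["bank", "library", "department store", "mall", "new york"]

texts = [build_input(tokenizer, question, concept, c) for c in choices]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
# Multiple-choice heads expect tensors of shape (batch, num_choices, seq_len).
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```

In the actual system, the descriptions would be retrieved from Wiktionary for the concepts mentioned in the question and answer choices, and the language model would be fine-tuned on the QA training data rather than used with an untrained classification head.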
Published in Findings of ACL 2021 (PDF available).

Results from the Paper


Ranked #4 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Task                    Dataset        Model     Metric    Value  Global Rank
Common Sense Reasoning  CommonsenseQA  DEKCOR    Accuracy  83.3   #4
Question Answering      OpenBookQA     TTTTT 3B  Accuracy  83.2   #16
Question Answering      OpenBookQA     DEKCOR    Accuracy  82.4   #19
