KAMEL: Knowledge Analysis with Multitoken Entities in Language Models

Large language models (LMs) have been shown to capture large amounts of relational knowledge from their pre-training corpus. These models can be probed for this factual knowledge with cloze-style prompts, as demonstrated on the LAMA benchmark. However, recent studies have revealed that these results are largely achieved because the models are good at making educated guesses or recalling facts seen in the training data. We present KAMEL, a novel Wikidata-based benchmark dataset for probing relational knowledge in LMs. In contrast to previous datasets, it covers a broader range of knowledge, probes for single- and multi-token entities, and contains facts with literal values. Furthermore, its evaluation procedure is more accurate, since the dataset contains alternative entity labels and handles higher-cardinality relations. Instead of evaluating masked language models, we report results for a variety of recent causal LMs in a few-shot setting. We show that recent models indeed perform very well on LAMA, achieving a promising F1-score of 52.90%, while reaching only 17.62% on KAMEL. Our analysis shows that even large language models are far from being able to memorize all the varieties of relational knowledge that are usually stored in knowledge graphs.
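
To make the probing setup concrete, the sketch below shows how a causal LM can be queried for a fact with a few-shot prompt and free generation of a possibly multi-token answer. It is an illustrative approximation of the setting described above, not the paper's code: the prompt template, the example relation, and the smaller facebook/opt-1.3b checkpoint (standing in for the OPT-13b model in the results) are assumptions.

```python
# Minimal sketch of few-shot factual probing with a causal LM.
# Prompt format, relation, and checkpoint are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # stand-in for the larger OPT-13b evaluated in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Few-shot prompt: k in-context examples for one relation, then the query fact.
# Multi-token object entities are generated freely rather than filled into a single mask slot.
prompt = (
    "What is the capital of France? Paris\n"
    "What is the capital of Japan? Tokyo\n"
    "What is the capital of Canada? Ottawa\n"
    "What is the capital of Australia?"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=10,   # allow multi-token answers
        do_sample=False,     # greedy decoding for deterministic probing
    )

# Keep only the newly generated tokens and cut the answer at the first newline.
answer = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.split("\n")[0].strip())
```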

Datasets

Introduced in the Paper: KAMEL

Used in the Paper: LAMA, T-REx, BioLAMA, KMIR

Results from the Paper

Task                      Dataset  Model    Metric Name  Metric Value  Global Rank
Probing Language Models   KAMEL    OPT-13b  Average F1   17.62         #1
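
As a rough illustration of how an F1-based evaluation can accommodate alternative entity labels and higher-cardinality relations, the following sketch scores a single fact by matching normalized predictions against sets of accepted labels per gold entity. The normalization and matching rules are assumptions for illustration; they are not claimed to reproduce the paper's evaluation code or the Average F1 number above.

```python
# Illustrative per-fact F1 with alternative labels for higher-cardinality relations.
def f1_for_fact(predictions, gold_entities):
    """predictions: list of predicted strings.
    gold_entities: list of sets, each holding all accepted labels of one gold entity."""
    norm = lambda s: s.strip().lower()
    preds = {norm(p) for p in predictions}

    # A gold entity is recovered if any of its accepted labels was predicted.
    matched_gold = sum(
        1 for labels in gold_entities if any(norm(l) in preds for l in labels)
    )
    # A prediction is correct if it matches an accepted label of some gold entity.
    matched_pred = sum(
        1 for p in preds if any(norm(l) == p for labels in gold_entities for l in labels)
    )

    precision = matched_pred / len(preds) if preds else 0.0
    recall = matched_gold / len(gold_entities) if gold_entities else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a "shares border with" fact with four gold objects; the model predicted
# two of them, one under an alternative label ("Czechia" for "Czech Republic").
print(f1_for_fact(
    ["Germany", "Czechia"],
    [{"Germany"}, {"Czech Republic", "Czechia"}, {"Slovakia"}, {"Hungary"}],
))  # precision 1.0, recall 0.5 -> F1 ~= 0.667
```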
