Knowledge Probing
19 papers with code • 6 benchmarks • 2 datasets
Most implemented papers
GPT Understands, Too
Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
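As a minimal sketch of this idea (not the paper's P-tuning method, which learns continuous prompt embeddings), a masked language model can be probed with a hand-written natural-language pattern; the model choice below is illustrative:

```python
# Minimal sketch: probing a masked LM with a hand-written cloze pattern.
# Uses Hugging Face `transformers`; the model choice is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# A natural-language pattern with one blank for the model to fill.
for pred in unmasker("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  p={pred['score']:.3f}")
```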
LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings
Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph structure and text-rich entity/relation information.
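To make the two characteristics concrete, here is a hedged sketch of a text-rich KG triple; the class and field names are hypothetical, not LambdaKG's actual API:

```python
# Illustrative sketch of a text-rich knowledge-graph triple; the classes and
# field names are hypothetical, not LambdaKG's API.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    description: str  # text-rich information attached to the node

@dataclass
class Triple:
    head: Entity
    relation: str  # relations may come from heterogeneous schemas
    tail: Entity

triple = Triple(
    head=Entity("Marie Curie", "Physicist and chemist, pioneer of radioactivity."),
    relation="award_received",
    tail=Entity("Nobel Prize in Physics", "Annual physics prize awarded since 1901."),
)

# A PLM-based KG embedding method can linearize the triple into text:
text = f"{triple.head.name} ({triple.head.description}) {triple.relation} {triple.tail.name}"
print(text)
```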
Can Language Models Solve Graph Problems in Natural Language?
We propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches that improve LLMs' ability to solve graph problems posed in natural language.
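As an illustration of the general idea (a sketch in the spirit of Build-a-Graph Prompting, not the paper's exact templates), an instruction-style prompt can ask the model to construct the graph explicitly before reasoning over it:

```python
# Hedged sketch of an instruction-based graph prompt; the wording is
# illustrative, not the paper's template.
edges = [("A", "B"), ("B", "C"), ("C", "D")]

edge_text = ", ".join(f"{u} is connected to {v}" for u, v in edges)
prompt = (
    f"In an undirected graph, {edge_text}.\n"
    "First, write out the full adjacency list of this graph.\n"  # "build a graph"
    "Then, using that list, determine whether there is a path from A to D, "
    "explaining each step."  # guided, stepwise reasoning
)
print(prompt)
```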
CoLAKE: Contextualized Language and Knowledge Embedding
In the emerging line of work that incorporates factual knowledge into pre-trained language models such as BERT, most existing models use shallow, static, and separately pre-trained entity embeddings, which limits their performance gains.
An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models
Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models.
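A minimal sketch of this style of 1-hop probing; the relation template and facts below are illustrative examples, and LAMA-style benchmarks supply many such templates:

```python
# Hedged sketch of LAMA-style 1-hop relation probing with a masked LM.
# The template, facts, and model choice are illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

template = "{subject} was born in [MASK]."  # one template per 1-hop relation
facts = [("Albert Einstein", "ulm"), ("Barack Obama", "honolulu")]

correct = 0
for subject, gold in facts:
    top = unmasker(template.format(subject=subject), top_k=1)[0]["token_str"]
    correct += int(top.strip().lower() == gold)
print(f"P@1 = {correct / len(facts):.2f}")
```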
Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models
To catalyse the research in this direction, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, which is constructed based on the Unified Medical Language System (UMLS) Metathesaurus.
DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models that inject relation triples from knowledge graphs to improve language understanding abilities.
mGPT: Few-Shot Learners Go Multilingual
Recent studies report that autoregressive language models can successfully solve many NLP tasks via zero- and few-shot learning paradigms, which opens up new possibilities for using pre-trained language models.
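For readers unfamiliar with the paradigm, here is a hedged sketch of a few-shot prompt for an autoregressive model; the task, demonstrations, and model name are illustrative, not mGPT's setup:

```python
# Hedged sketch of few-shot prompting an autoregressive LM; the demonstrations
# and model choice are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few in-context demonstrations followed by the query to complete.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "house =>"
)
out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```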
LM-CORE: Language Models with Contextually Relevant External Knowledge
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters.
Calibrating Factual Knowledge in Pretrained Language Models
We find, however, that the facts stored in PLMs are not always correct.
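A hedged sketch of how such an error can be detected: probe the model and compare its top prediction against a gold answer from a knowledge base. The fact and model below are illustrative; this is not the paper's calibration method:

```python
# Hedged sketch: checking whether a fact is "stored" correctly by comparing
# the model's top prediction against a gold answer; details are illustrative.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

cloze = "The largest planet in the solar system is [MASK]."
gold = "jupiter"

inputs = tok(cloze, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

prediction = tok.decode([logits.argmax().item()]).strip().lower()
print("correct" if prediction == gold else f"incorrect: predicted '{prediction}'")
```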