Knowledge Probing

19 papers with code • 6 benchmarks • 2 datasets

Knowledge probing measures how much factual or relational knowledge a pretrained language model stores in its parameters, typically by querying the model with cloze-style or prompt-based templates and checking its predictions against known facts.

Most implemented papers

GPT Understands, Too

THUDM/P-tuning 18 Mar 2021

Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
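As a rough illustration of pattern-based prompting, the sketch below casts a classification query as masked-token prediction with a hand-written cloze pattern using the Hugging Face `transformers` fill-mask pipeline. The pattern, verbalizer words, and model choice are assumptions for demonstration only; P-tuning itself replaces such discrete patterns with learned continuous prompt embeddings.

```python
# Sketch: a hand-written cloze pattern that casts sentiment classification
# as masked-token prediction. Pattern, verbalizers, and model are illustrative
# assumptions, not the paper's P-tuning setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The plot was thin but the acting was wonderful."
prompt = f"{review} Overall, it was [MASK]."

# Compare the scores the model assigns to the verbalizer word for each label.
scores = {p["token_str"]: p["score"] for p in fill_mask(prompt, targets=["great", "terrible"])}
print(scores)
```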

LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings

zjunlp/promptkg 1 Oct 2022

Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph structure and text-rich entity/relation information.

Can Language Models Solve Graph Problems in Natural Language?

arthur-heng/nlgraph NeurIPS 2023

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.
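A minimal sketch of what an instruction-style graph prompt might look like is shown below. The template wording and the helper `build_graph_prompt` are hypothetical, intended only to illustrate the idea of describing the graph before asking the question; they are not the exact prompts proposed in the paper.

```python
# Hypothetical sketch of an instruction-style prompt for a graph connectivity
# question; the template wording is an assumption, not the paper's exact prompt.
from typing import List, Tuple

def build_graph_prompt(edges: List[Tuple[int, int]], source: int, target: int) -> str:
    edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
    return (
        "You are given an undirected graph. First, build the graph from the "
        "edge list, then reason over it step by step.\n"
        f"Edges: {edge_text}\n"
        f"Question: Is there a path between node {source} and node {target}?"
    )

print(build_graph_prompt([(0, 1), (1, 2), (3, 4)], source=0, target=2))
```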

CoLAKE: Contextualized Language and Knowledge Embedding

txsun1997/CoLAKE COLING 2020

Within the emerging line of work on incorporating factual knowledge into pre-trained language models such as BERT, most existing models rely on shallow, static, and separately pre-trained entity embeddings, which limits their performance gains.

An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models

cloudygoose/fewshot_lama 6 Sep 2021

Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models.
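A minimal sketch of such a 1-hop cloze probe, with a few in-context demonstrations prepended to the query, might look like the following. The template, demonstrations, and model choice are assumptions for illustration, not the paper's exact few-shot setup.

```python
# Sketch of a LAMA-style 1-hop knowledge probe with a masked LM.
# Template, demonstrations, and model are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Few-shot demonstrations for the "born-in" relation, followed by the query.
prompt = (
    "Albert Einstein was born in Ulm. "
    "Marie Curie was born in Warsaw. "
    "Dante Alighieri was born in [MASK]."
)

for pred in fill_mask(prompt, top_k=3):
    print(f"{pred['token_str']:>12s}  {pred['score']:.3f}")
```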

Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models

cambridgeltl/medlama ACL 2022

To catalyse the research in this direction, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, which is constructed based on the Unified Medical Language System (UMLS) Metathesaurus.

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

alibaba/EasyNLP 2 Dec 2021

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.

mGPT: Few-Shot Learners Go Multilingual

ai-forever/mgpt 15 Apr 2022

Recent studies report that autoregressive language models can successfully solve many NLP tasks via zero- and few-shot learning paradigms, which opens up new possibilities for using pre-trained language models.

LM-CORE: Language Models with Contextually Relevant External Knowledge

sumit-research/lmcore Findings (NAACL) 2022

Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters.

Calibrating Factual Knowledge in Pretrained Language Models

dqxiu/calinet 7 Oct 2022

However, we find that facts stored in the PLMs are not always correct.
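One simple way to surface such incorrect facts is to compare a probe's top prediction against a gold answer. The sketch below is an assumed evaluation loop (precision@1 over a tiny hand-made set of probes), not the calibration method the paper proposes.

```python
# Sketch: flag facts the model gets wrong by checking its top-1 prediction
# against a gold answer. The probes and model are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

probes = [
    ("The capital of France is [MASK].", "paris"),
    ("The capital of Australia is [MASK].", "canberra"),
]

correct = 0
for prompt, gold in probes:
    top = fill_mask(prompt, top_k=1)[0]["token_str"].strip().lower()
    status = "ok" if top == gold else f"WRONG (predicted '{top}')"
    correct += top == gold
    print(f"{prompt}  gold={gold}  ->  {status}")

print(f"precision@1 = {correct}/{len(probes)}")
```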