LAMA

12 papers with code • 2 benchmarks • 1 dataset

LAMA (LAnguage Model Analysis) probes the factual and commonsense knowledge stored in pretrained language models: facts are expressed as cloze-style statements and the model is asked to fill in the blank.

Datasets

LAMA
Greatest papers with code

P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts

makcedward/nlpaug 14 Oct 2021

P-Adapters take LLM embeddings as input and output continuous prompts that are used to query the LLM.

LAMA
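
As a rough illustration of the mechanism described above, the PyTorch sketch below maps a query's input embeddings to a handful of continuous prompt vectors that are prepended before the frozen LM is queried. The module layout and sizes are assumptions made for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class PromptAdapter(nn.Module):
    """Hypothetical P-Adapter-style module: consumes the LLM's input
    embeddings and emits a fixed number of continuous prompt vectors.
    Sizes and layers are illustrative, not the paper's."""
    def __init__(self, hidden_size=768, n_prompt_tokens=5):
        super().__init__()
        self.n_prompt_tokens = n_prompt_tokens
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, n_prompt_tokens * hidden_size),
        )

    def forward(self, input_embeds):            # (batch, seq, hidden)
        pooled = input_embeds.mean(dim=1)       # summarise the query
        prompts = self.proj(pooled)
        prompts = prompts.view(-1, self.n_prompt_tokens, input_embeds.size(-1))
        # prepend the continuous prompts to the original embeddings
        return torch.cat([prompts, input_embeds], dim=1)

embeds = torch.randn(2, 12, 768)
print(PromptAdapter()(embeds).shape)            # torch.Size([2, 17, 768])
```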

GPT Understands, Too

PaddlePaddle/PaddleNLP 18 Mar 2021

On the SuperGLUE benchmark, GPTs achieve performance comparable to, and sometimes better than, similar-sized BERTs in supervised learning.

Fine-tuning • LAMA • +2

Resolution-robust Large Mask Inpainting with Fourier Convolutions

saic-mdal/lama 15 Sep 2021

We find that one of the main reasons inpainting systems struggle with large missing areas is the lack of an effective receptive field in both the inpainting network and the loss function.

Image Inpainting • LAMA
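
One way to give a layer an image-wide receptive field, which is the idea behind the fast Fourier convolutions used in this work, is to convolve in the frequency domain. The PyTorch block below is a minimal sketch of that spectral path only (the actual FFC also keeps a local convolutional branch); it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralTransform(nn.Module):
    """Simplified Fourier-convolution-style block (assumption: reduced to the
    global spectral path). A 1x1 conv applied in the frequency domain gives
    every output pixel a receptive field covering the whole image."""
    def __init__(self, channels):
        super().__init__()
        self.freq_conv = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, kernel_size=1),
            nn.BatchNorm2d(channels * 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")            # complex spectrum
        spec = torch.cat([spec.real, spec.imag], dim=1)    # stack as channels
        spec = self.freq_conv(spec)
        real, imag = spec.chunk(2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho")

x = torch.randn(1, 64, 256, 256)
print(SpectralTransform(64)(x).shape)  # torch.Size([1, 64, 256, 256])
```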

Language Models as Knowledge Bases?

facebookresearch/LAMA IJCNLP 2019

Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks.

Fine-tuning • LAMA • +2
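
The LAMA probe itself amounts to cloze-style queries against a pretrained masked LM. A minimal sketch using the Hugging Face fill-mask pipeline (the model choice and prompt are illustrative, not a prescribed setup):

```python
from transformers import pipeline

# Query a pretrained masked LM with a LAMA-style cloze statement and
# inspect its top predictions for the blank.
fill = pipeline("fill-mask", model="bert-base-cased")

for pred in fill("Paris is the capital of [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```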

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts

ucinlp/autoprompt EMNLP 2020

The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining.

LAMA • Natural Language Inference • +2

Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training

google-research-datasets/KELM-corpus NAACL 2021

Prior work on Data-to-Text Generation, the task of converting knowledge graph (KG) triples into natural text, focused on domain-specific benchmark datasets.

Data-to-Text Generation • LAMA • +1
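
For intuition, data-to-text generation turns a (subject, relation, object) triple into a natural-language sentence that can then be added to a pre-training corpus. The snippet below is a toy, template-based stand-in with made-up relation templates; KELM itself trains a sequence-to-sequence verbaliser on Wikidata rather than using fixed templates.

```python
# Toy, template-based verbaliser for KG triples (illustrative only).
TEMPLATES = {
    "capital_of": "{subj} is the capital of {obj}.",
    "born_in": "{subj} was born in {obj}.",
    "occupation": "{subj} works as a {obj}.",
}

def verbalize(subj: str, relation: str, obj: str) -> str:
    return TEMPLATES[relation].format(subj=subj, obj=obj)

triples = [("Paris", "capital_of", "France"),
           ("Barack Obama", "occupation", "politician")]
print("\n".join(verbalize(*t) for t in triples))
```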

Factual Probing Is [MASK]: Learning vs. Learning to Recall

princeton-nlp/OptiPrompt NAACL 2021

Petroni et al. (2019) demonstrated that it is possible to retrieve world facts from a pre-trained language model by expressing them as cloze-style prompts, interpreting the model's prediction accuracy as a lower bound on the amount of factual information it encodes.

LAMA • Language Modelling
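
The probing protocol referred to above fits in a few lines: fill one cloze prompt per fact, count precision@1, and read that number as a lower bound on the factual knowledge the model stores. The two facts below are illustrative stand-ins for the LAMA relations, and the model choice is an assumption.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

facts = [
    ("Dante was born in [MASK].", "Florence"),
    ("The iPhone is produced by [MASK].", "Apple"),
]

# Take the top prediction for each cloze prompt and score exact matches.
correct = sum(fill(prompt)[0]["token_str"] == answer for prompt, answer in facts)
print(f"P@1 = {correct / len(facts):.2f}  (read as a lower bound on stored facts)")
```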

How Can We Know What Language Models Know?

jzbjyb/LPAQA TACL 2020

Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession".

LAMA
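
One way to act on this, in the spirit of LPAQA's mined and paraphrased prompts, is to query the same fact with several wordings and ensemble the [MASK] distributions. The prompts and model below are illustrative assumptions rather than prompts mined by the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

prompts = [
    "Obama is a [MASK] by profession.",
    "Obama works as a [MASK].",
    "Obama's profession is [MASK].",
]

# Average the predicted distribution over the [MASK] position across prompts.
probs = []
for p in prompts:
    inputs = tok(p, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs.append(logits.softmax(-1))

avg = torch.stack(probs).mean(0)
print(tok.decode([int(avg.argmax())]))
```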

Understanding by Understanding Not: Modeling Negation in Language Models

arianhosseini/negation-learning NAACL 2021

To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus.

LAMA • Language Modelling
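
A minimal sketch of such an augmented objective, under the assumption that tokens from negated generic sentences are penalised with an unlikelihood term, log(1 - p), while ordinary tokens keep the usual negative log-likelihood (the paper's exact formulation may differ):

```python
import torch

def unlikelihood_augmented_loss(log_probs, target_ids, negated_mask):
    """log_probs: (batch, vocab) log-probabilities at the predicted position;
    target_ids: (batch,) gold token ids; negated_mask: (batch,) True where
    the example comes from a negated generic sentence. Illustrative only."""
    token_logp = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)
    likelihood = -token_logp                                  # standard NLL
    p = token_logp.exp().clamp(max=1 - 1e-6)
    unlikelihood = -torch.log1p(-p)                           # -log(1 - p)
    return torch.where(negated_mask, unlikelihood, likelihood).mean()

log_probs = torch.log_softmax(torch.randn(4, 30522), dim=-1)
targets = torch.randint(0, 30522, (4,))
negated = torch.tensor([False, False, True, True])
print(unlikelihood_augmented_loss(log_probs, targets, negated))
```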