188 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in World Knowledge.
Libraries: Use these libraries to find World Knowledge models and implementations.
By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.
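One common way such breadth-and-depth tests are scored is per-task multiple-choice accuracy, macro-averaged across subjects so that weak areas are not hidden by strong ones. A minimal sketch (function names and the scoring scheme are illustrative, not taken from the benchmark itself):

```python
def task_accuracy(predictions, answers):
    # Fraction of multiple-choice questions answered correctly for one task.
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

def macro_average(per_task_accuracies):
    # Weight every subject equally, so a shortcoming in one narrow
    # subject is visible rather than diluted by question count.
    return sum(per_task_accuracies) / len(per_task_accuracies)
```

Reporting both the macro average and the per-task scores is what allows a single test to "identify important shortcomings" in specific subjects.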
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.
Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF).
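The "relative quality" labels above are typically pairs of generations where humans marked one as preferred; a reward model is then fit with a Bradley-Terry pairwise loss before (or instead of) RL. A minimal sketch of that loss, assuming scalar rewards for the chosen and rejected responses (pure Python, names illustrative):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry model: P(chosen beats rejected) = sigmoid(r_c - r_r).
    # Minimizing the negative log-likelihood pushes the reward of the
    # human-preferred response above the rejected one.
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))
```

When both rewards are equal the loss is log 2 (the model is indifferent); it shrinks toward zero as the margin in favor of the chosen response grows.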
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
MEIM: Multi-partition Embedding Interaction Beyond Block Term Format for Efficient and Expressive Link Prediction
Knowledge graph embedding aims to predict missing links (relations between entities) in knowledge graphs.
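Link prediction with embeddings works by scoring candidate (head, relation, tail) triples and ranking tails by score. A minimal sketch using the classic TransE scoring function as a stand-in (MEIM itself uses a multi-partition block-term interaction, not shown here):

```python
import numpy as np

def transe_score(head, relation, tail):
    # TransE treats a relation as a translation in embedding space:
    # a plausible triple has head + relation close to tail, so we score
    # by negative L2 distance (higher score = more plausible link).
    return -np.linalg.norm(head + relation - tail)
```

A missing link is predicted by evaluating this score for every candidate tail entity and keeping the top-ranked ones.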
To investigate question answering with prior knowledge, we present CommonsenseQA: a challenging new dataset for commonsense question answering.
We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge.