Common Sense Reasoning
145 papers with code • 20 benchmarks • 44 datasets
Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
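To make the task format concrete, here is a hypothetical multiple-choice instance in the question-plus-answer-options form used by benchmarks such as CommonsenseQA; the wording is invented for illustration, not drawn from any dataset.

```python
# An illustrative commonsense multiple-choice instance (invented example).
# Answering it requires everyday world knowledge rather than surface
# pattern matching on the question text.
example = {
    "question": "Where would you put a dirty dish after dinner?",
    "choices": ["in the sink", "on the roof", "in the mailbox"],
    "answer": "in the sink",
}
```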
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
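As a rough sketch of how a bidirectional encoder such as BERT is commonly applied to multiple-choice commonsense questions (this assumes the Hugging Face `transformers` library and is not the procedure from the BERT paper; the question and choices are the hypothetical example above):

```python
# Sketch: encode a question paired with each candidate answer and let a
# multiple-choice head pick one. The head is freshly initialized here, so
# predictions are only meaningful after fine-tuning on a commonsense dataset.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased").eval()

question = "Where would you put a dirty dish after dinner?"
choices = ["in the sink", "on the roof", "in the mailbox"]

# Encode the question paired with each candidate answer.
encoding = tokenizer([question] * len(choices), choices,
                     return_tensors="pt", padding=True)
# The model expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```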
Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
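An alternative that needs no task-specific training data is to score answer candidates directly with a pretrained language model. A minimal sketch, assuming the Hugging Face `transformers` GPT-2 checkpoint and an invented prompt; this simplification scores the whole sentence rather than only the continuation and is not the evaluation protocol of any particular paper:

```python
# Sketch: zero-shot multiple choice by comparing how likely a causal
# language model finds each completed sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "After dinner, she put the dirty dish"
choices = [" in the sink.", " on the roof.", " in the mailbox."]

def mean_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing `labels=ids` makes the model return the mean cross-entropy.
        loss = model(ids, labels=ids).loss
    return -loss.item()

best = max(choices, key=lambda c: mean_log_likelihood(prompt + c))
print(best)
```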
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification.
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.