Common Sense Reasoning

145 papers with code • 20 benchmarks • 44 datasets

Common sense reasoning tasks are intended to require a model to go beyond pattern recognition and instead draw on "common sense" or world knowledge to make inferences.
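
Many common sense benchmarks are posed as multiple-choice questions, and a simple baseline is to score each candidate answer with a pretrained language model and pick the most plausible completion. The sketch below illustrates that idea; it assumes the Hugging Face transformers library and a small GPT-2 checkpoint, and the question and choices are invented for illustration rather than taken from any benchmark.

```python
# Minimal sketch: scoring multiple-choice common sense questions by comparing
# language-model likelihoods of the candidate completions. Assumes the Hugging
# Face `transformers` library; the example question and choices are made up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

question = "Where would you put a dirty dish after dinner?"
choices = ["in the sink", "on the roof", "inside a book"]

def sequence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to the tokens of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # `loss` is the mean negative log-likelihood over the predicted tokens
    # (all tokens except the first), so multiply back to get a total.
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predicted

scores = {c: sequence_log_likelihood(f"{question} {c}.") for c in choices}
prediction = max(scores, key=scores.get)
print(prediction)  # the completion the model finds most plausible
```

Total likelihood favors shorter answers, so dividing each score by its token count (length normalization) is a common refinement.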

Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
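
As a quick way to try the pretrained checkpoint, the sketch below runs masked-token prediction, the pre-training objective that gives BERT its bidirectional context. It assumes the Hugging Face transformers port of the bert-base-uncased weights rather than the original google-research/bert code, and the sentence is an invented example.

```python
# Minimal sketch: masked-token prediction with a pretrained BERT checkpoint,
# assuming the Hugging Face `transformers` port of the original weights.
from transformers import pipeline

# Predict the masked token using context from both sides of the mask.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("A hammer is used to drive a [MASK] into wood."):
    print(candidate["token_str"], round(candidate["score"], 3))
```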

RoBERTa: A Robustly Optimized BERT Pretraining Approach

pytorch/fairseq 26 Jul 2019

Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

google-research/ALBERT ICLR 2020

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

google-research/text-to-text-transfer-transformer arXiv 2019

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
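
The sketch below shows the text-to-text interface the paper builds on, where every task is phrased as a string-in, string-out problem. It assumes the Hugging Face transformers port of the t5-small checkpoint; the "question: ... context: ..." prompt follows the convention used for the paper's QA tasks, and the example itself is invented.

```python
# Minimal sketch: T5's text-to-text interface, assuming the Hugging Face
# `transformers` port of a small T5 checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The task is serialized into the input string; the answer comes back as text.
prompt = ("question: Why do people wear coats in winter? "
          "context: Coats keep the body warm in cold weather.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```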

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

Humans can generally perform a new language task from only a few examples or from simple instructions, something which current NLP systems still largely struggle to do.
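
In the few-shot setting the task is specified entirely in the prompt, through a handful of demonstrations and no gradient updates. GPT-3 itself is only available through an API, so the sketch below uses a small GPT-2 checkpoint via Hugging Face transformers as a stand-in, with made-up demonstrations; it illustrates the prompting format, not the paper's results.

```python
# Minimal sketch of few-shot prompting: demonstrations in the prompt, no
# fine-tuning. Uses GPT-2 via Hugging Face `transformers` as a stand-in for
# GPT-3; the questions are invented for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Two demonstrations followed by the query; the model continues the pattern.
prompt = (
    "Q: Can you keep milk fresh longer by putting it in the fridge?\nA: yes\n"
    "Q: Can you use a spoon to cut down a tree?\nA: no\n"
    "Q: Would an umbrella help you stay dry in the rain?\nA:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```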

A Neural Conversational Model

farizrahman4u/seq2seq 19 Jun 2015

We find that this straightforward sequence-to-sequence model can generate simple conversations given a large conversational training dataset.

Language Models are Unsupervised Multitask Learners

PaddlePaddle/PaddleNLP Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
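
The paper's counterpoint is that a sufficiently large language model can perform such tasks zero-shot, with the task implied purely by the prompt. The sketch below illustrates this with the "TL;DR:" summarization cue discussed in the paper, assuming the Hugging Face transformers port of the GPT-2 checkpoint and an invented passage; output quality with the small public checkpoint will be modest compared with the paper's largest model.

```python
# Minimal sketch of zero-shot task specification with GPT-2: the "TL;DR:"
# suffix cues summarization without any task-specific training. Assumes the
# Hugging Face `transformers` port; the passage is made up for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

passage = (
    "The city council voted on Tuesday to extend library opening hours, "
    "citing rising demand from students and remote workers.\nTL;DR:"
)
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=25, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```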

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

The "something something" video database for learning and evaluating visual common sense

jayleicn/singularity ICCV 2017

Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification.

mT5: A massively multilingual pre-trained text-to-text transformer

google-research/multilingual-t5 NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.