Common Sense Reasoning

108 papers with code • 17 benchmarks • 33 datasets

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
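To make the task concrete, the sketch below loads a single item from WinoGrande, one dataset commonly used under this task; the dataset choice and the Hugging Face `datasets` calls are illustrative assumptions, not part of the listing on this page.

```python
# A minimal sketch, assuming the Hugging Face `datasets` library and the
# WinoGrande dataset (one common commonsense benchmark; the choice is illustrative).
from datasets import load_dataset

winogrande = load_dataset("winogrande", "winogrande_xl", split="validation")
item = winogrande[0]

# Each item is a fill-in-the-blank sentence whose resolution requires world
# knowledge rather than surface pattern matching.
print(item["sentence"])                       # sentence containing a "_" placeholder
print(item["option1"], "|", item["option2"])  # the two candidate fillers
print("answer:", item["answer"])              # "1" or "2"
```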

Greatest papers with code

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

tensorflow/models ICLR 2020

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

Common Sense Reasoning Linguistic Acceptability +4

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

tensorflow/models NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.

Common Sense Reasoning Conversational Response Selection +7
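As a rough illustration of using such a pretrained bidirectional encoder, the sketch below runs masked-token prediction with a public BERT checkpoint through the huggingface/transformers library listed for other entries on this page; the checkpoint name and the prompt are illustrative assumptions.

```python
# Minimal sketch: masked-token prediction with a public BERT checkpoint.
# Assumes the huggingface/transformers library; the checkpoint and prompt
# are illustrative, not the paper's evaluation setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("The ice cream was left out in the sun, so it [MASK]."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```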

A Simple Method for Commonsense Reasoning

tensorflow/models 7 Jun 2018

Commonsense reasoning is a long-standing challenge for deep learning.

Common Sense Reasoning
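The approach this entry refers to scores candidate resolutions of an ambiguous pronoun by substituting each candidate into the sentence and letting a pretrained language model pick the more probable variant. A minimal sketch of that scoring scheme follows, using GPT-2 from huggingface/transformers as a stand-in for the language models trained in the paper.

```python
# Minimal sketch of LM-based scoring for a Winograd-style example: substitute
# each candidate into the sentence and prefer the one the language model finds
# more probable. GPT-2 here is a stand-in for the paper's own language models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean per-token cross-entropy.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)  # approximate total log-probability

sentence = "The trophy doesn't fit in the suitcase because the {} is too big."
for candidate in ("trophy", "suitcase"):
    print(candidate, round(sentence_log_prob(sentence.format(candidate)), 2))
```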

mT5: A massively multilingual pre-trained text-to-text transformer

huggingface/transformers NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.

Common Sense Reasoning Natural Language Inference +3

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Common Sense Reasoning Coreference Resolution +11

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

huggingface/transformers arXiv 2019

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).

Common Sense Reasoning Language understanding +4
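A minimal sketch of the unified text-to-text format this entry describes, in which every task is expressed as mapping an input string (with a task prefix) to an output string; the t5-small checkpoint and the translation prefix below are illustrative assumptions.

```python
# Minimal sketch of the text-to-text format: every task is "text in, text out",
# selected by a task prefix. Assumes huggingface/transformers and the public
# t5-small checkpoint; the prefix and input are illustrative.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```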

RoBERTa: A Robustly Optimized BERT Pretraining Approach

huggingface/transformers 26 Jul 2019

Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.

Common Sense Reasoning Language Modelling +6

Language Models are Unsupervised Multitask Learners

huggingface/transformers Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Ranked #1 on Language Modelling on enwik8 (using extra training data)

Common Sense Reasoning Data-to-Text Generation +7
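The entry's central claim is that a large language model trained only on next-token prediction can attempt such tasks zero-shot when conditioned on a natural-language prompt. Below is a minimal sketch using the small public gpt2 checkpoint via huggingface/transformers; the prompt and checkpoint size are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch of zero-shot prompting a pretrained language model.
# The small public "gpt2" checkpoint and the prompt are illustrative
# assumptions, not the paper's exact models or evaluation protocol.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Q: What happens to a snowman when the weather gets warm?\nA:"
print(generator(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"])
```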

DKN: Deep Knowledge-Aware Network for News Recommendation

microsoft/recommenders 25 Jan 2018

In this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representations into news recommendation.

Click-Through Rate Prediction Common Sense Reasoning +2

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

Humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.

Common Sense Reasoning Coreference Resolution +10
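Few-shot learning in this setting means conditioning the model on a handful of worked examples followed by a new query, with no gradient updates. The sketch below only assembles such a prompt; the demonstration pairs are invented for illustration, and the call to the hosted GPT-3 model itself is not shown.

```python
# Minimal sketch of building a few-shot prompt: a few worked examples followed
# by the query, with no parameter updates. The demonstrations are illustrative;
# sending the prompt to the hosted GPT-3 model is not shown here.
demonstrations = [
    ("A glass falls off a table onto a tile floor.", "The glass shatters."),
    ("Water is left in a freezer overnight.", "The water turns to ice."),
]
query = "A metal spoon is left in a pot of boiling soup."

prompt = "".join(f"Situation: {x}\nLikely outcome: {y}\n\n" for x, y in demonstrations)
prompt += f"Situation: {query}\nLikely outcome:"
print(prompt)
```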