Common Sense Reasoning

253 papers with code • 24 benchmarks • 52 datasets

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.

VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning

faceonlive/ai-research 10 Apr 2024

In the first stage, we propose prompting VLLMs to generate natural-language descriptions of the subject's apparent emotion relative to the visual context.

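As a rough illustration of this first stage, here is a minimal sketch that asks a vision-language model to describe the apparent emotion of a subject in its visual context. It assumes an OpenAI-compatible vision chat endpoint; the model name, prompt wording, and the describe_emotion helper are illustrative assumptions, not the paper's exact setup.

    # Stage-1 sketch: prompt a VLLM for a natural-language emotion description.
    # Model name and prompt text are assumptions, not the paper's exact setup.
    import base64

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def describe_emotion(image_path: str) -> str:
        """Return a short description of the subject's apparent emotion
        relative to the surrounding visual context."""
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any vision-capable chat model works here
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": ("Describe in one short paragraph the apparent "
                              "emotion of the main subject, explicitly "
                              "referring to the surrounding visual context.")},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # The returned description can then serve as extra text context for a
    # downstream emotion-recognition model (the paper's second stage).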

AILS-NTUA at SemEval-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles

giannispana/ails-ntua-at-semeval-2024-task-9-brainteaser 1 Apr 2024

In this paper, we outline our submission for the SemEval-2024 Task 9 competition: 'BRAINTEASER: A Novel Task Defying Common Sense'.

Common Sense Enhanced Knowledge-based Recommendation with Large Language Model

ysh-1998/csrec 27 Mar 2024

Knowledge-based recommendation models effectively alleviate the data sparsity issue by leveraging the side information in the knowledge graph, and have achieved considerable performance.

Large Language Models Need Consultants for Reasoning: Becoming an Expert in a Complex Human System Through Behavior Simulation

hakys-a/meow 27 Mar 2024

In the MEOW framework, simulated data are used to train an expert model that concentrates the "experience" of a specific task gathered in each independent simulation run.

IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models

csebuetnlp/illusionvqa 23 Mar 2024

GPT4V, the best-performing VLM, achieves 62.99% accuracy (4-shot) on the comprehension task and 49.7% on the localization task (4-shot and Chain-of-Thought).

Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation

18979705623/hspr 18 Mar 2024

Most Vision-and-Language Navigation (VLN) algorithms tend to make decision errors, primarily due to a lack of visual common sense and insufficient reasoning capabilities.

Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM

Leeroo-AI/mergoo 12 Mar 2024

We investigate efficient methods for training Large Language Models (LLMs) to possess capabilities in multiple specialized domains, such as coding, math reasoning and world knowledge.

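The title describes turning the feed-forward layers of separately trained domain-expert LLMs into the experts of a single mixture-of-experts layer with a learned router. The toy sketch below illustrates that layout; the MoEFeedForward class, shapes, and top-2 routing are illustrative assumptions, not the Branch-Train-MiX or mergoo implementation.

    # Toy mixture-of-experts feed-forward layer built from pre-existing expert FFNs.
    import torch
    import torch.nn as nn

    class MoEFeedForward(nn.Module):
        def __init__(self, expert_ffns: list, d_model: int, top_k: int = 2):
            super().__init__()
            self.experts = nn.ModuleList(expert_ffns)  # e.g. FFNs lifted from domain LLMs
            self.router = nn.Linear(d_model, len(expert_ffns))  # trained after merging
            self.top_k = top_k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq, d_model); route each token to its top-k experts.
            scores = self.router(x)                         # (B, S, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)  # per-token expert choice
            weights = weights.softmax(dim=-1)
            out = torch.zeros_like(x)
            # Dense evaluation for clarity: every expert runs on every token.
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = (idx[..., k] == e).unsqueeze(-1).to(x.dtype)
                    out = out + mask * weights[..., k:k + 1] * expert(x)
            return out

    # Three "expert" FFNs standing in for layers taken from code/math/general LLMs.
    d = 16
    ffns = [nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
            for _ in range(3)]
    layer = MoEFeedForward(ffns, d_model=d)
    print(layer(torch.randn(2, 5, d)).shape)  # torch.Size([2, 5, 16])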

Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving

mehdiazarafza/hybrid-reasoning 21 Feb 2024

Large Language Models (LLMs) have garnered significant attention for their ability to understand text and images, generate human-like text, and perform complex reasoning tasks.

MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models

Leeroo-AI/mergoo 20 Feb 2024

Fine-tuning is often necessary to enhance the adaptability of Large Language Models (LLMs) to downstream tasks.

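The title suggests a layer in which several low-rank LoRA adapters act as experts over a frozen pretrained weight, combined by a learned router. Below is a minimal sketch of that parameterisation; the MoELoRALinear class and its shapes are illustrative assumptions, and the paper's contrastive routing objective is omitted.

    # Toy mixture-of-LoRA-experts linear layer: frozen base weight plus a
    # router-weighted sum of low-rank updates.
    import torch
    import torch.nn as nn

    class MoELoRALinear(nn.Module):
        def __init__(self, d_in: int, d_out: int, n_experts: int = 4, rank: int = 8):
            super().__init__()
            self.base = nn.Linear(d_in, d_out)
            for p in self.base.parameters():   # stand-in for a frozen pretrained weight
                p.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
            self.router = nn.Linear(d_in, n_experts)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gate = self.router(x).softmax(dim=-1)                        # (..., E)
            low_rank = torch.einsum("erd,...d->...er", self.A, x)       # (..., E, r)
            delta = torch.einsum("eor,...er->...eo", self.B, low_rank)  # (..., E, d_out)
            return self.base(x) + (gate.unsqueeze(-1) * delta).sum(dim=-2)

    layer = MoELoRALinear(32, 32)
    print(layer(torch.randn(2, 10, 32)).shape)  # torch.Size([2, 10, 32])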

G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering

xiaoxinhe/g-retriever 12 Feb 2024

Given a graph with textual attributes, we enable users to "chat with their graph": that is, to ask questions about the graph using a conversational interface.

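As a rough illustration of "chatting with a graph", the sketch below retrieves the nodes whose text best matches the question, keeps the edges touching them, and packs that subgraph into an LLM prompt. The toy graph and the word-overlap scoring are simple stand-ins for G-Retriever's learned retrieval.

    # Minimal retrieve-then-prompt sketch over a textual graph (toy data).
    nodes = {
        "n1": "Alan Turing, mathematician and computer scientist",
        "n2": "Enigma machine, cipher device used in World War II",
        "n3": "Bletchley Park, British codebreaking centre",
    }
    edges = [("n1", "worked at", "n3"), ("n1", "helped break", "n2")]

    def overlap(text: str, question: str) -> int:
        """Crude relevance score: number of shared lowercase words."""
        return len(set(text.lower().split()) & set(question.lower().split()))

    def build_prompt(question: str, k: int = 2) -> str:
        # Keep the k most relevant nodes plus any edges that touch them.
        top = sorted(nodes, key=lambda n: overlap(nodes[n], question), reverse=True)[:k]
        facts = [nodes[n] for n in top]
        facts += [f"{nodes[s]} -- {r} -- {nodes[t]}"
                  for s, r, t in edges if s in top or t in top]
        context = "\n".join(facts)
        return f"Answer using only this graph context:\n{context}\n\nQuestion: {question}"

    # The resulting prompt would then be sent to any chat LLM.
    print(build_prompt("Where did Alan Turing work?"))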