Logical Reasoning
224 papers with code • 10 benchmarks • 13 datasets
Libraries
Use these libraries to find Logical Reasoning models and implementations.
Subtasks
- Navigate
- Temporal Sequences
- Novel Concepts
- StrategyQA
- Physical Intuition
- Date Understanding
- Elementary Mathematics
- Logic Grid Puzzle
- Logical Fallacy Detection
- Logical Sequence
- Epistemic Reasoning
- Analytic Entailment
- Checkmate In One
- Entailed Polarity
- Evaluating Information Essentiality
- Logical Args
- Metaphor Boolean
- Penguins In A Table
- Presuppositions As NLI
- Reasoning About Colored Objects
- College Mathematics
- Code Line Descriptions
Most implemented papers
Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
Logical operations are performed in the embedding space by neural operators over the probabilistic embeddings.
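A minimal sketch of what such embedding-space logical operators can look like, assuming (as a simplification) that each query is a Beta distribution parameterized by `(alpha, beta)`: negation takes the reciprocal of both parameters, and intersection here uses a uniform average of parameters as a stand-in for the paper's learned attention weighting. The function names are illustrative, not the paper's API.

```python
# Simplified BetaE-style operators over probabilistic embeddings.
# Each query/entity set is a Beta distribution with parameters (alpha, beta).

def negate(alpha: float, beta: float) -> tuple:
    # Negation via reciprocal parameters: shifts density from
    # high-probability regions to low-probability regions.
    return 1.0 / alpha, 1.0 / beta

def intersect(params: list) -> tuple:
    # Simplified intersection: uniform average of the input parameters
    # (the paper uses a learned attention weighting instead).
    n = len(params)
    return (sum(a for a, _ in params) / n,
            sum(b for _, b in params) / n)

negated = negate(2.0, 0.5)                     # (0.5, 2.0)
merged = intersect([(2.0, 4.0), (4.0, 2.0)])   # (3.0, 3.0)
```

Working with distribution parameters rather than point vectors is what lets negation be a closed-form operation here.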
PaLM: Scaling Language Modeling with Pathways
To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion-parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM).
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.
A Dataset and Architecture for Visual Reasoning with a Working Memory
COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory -- problems that remain challenging for modern deep learning architectures.
SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver
We demonstrate that by integrating this solver into end-to-end learning systems, we can learn the logical structure of challenging problems in a minimally supervised fashion.
Neural Collaborative Reasoning
Existing Collaborative Filtering (CF) methods are mostly designed around matching: by learning user and item embeddings from data with shallow or deep models, they capture associative relevance patterns so that a user embedding can be matched against relevant item embeddings using designed or learned similarity functions.
Neural Logic Reasoning
Both reasoning and generalization ability are important for prediction tasks such as recommender systems, where reasoning provides a strong connection between user history and target items for accurate prediction, and generalization helps the model draw a robust user portrait from noisy inputs.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Large Language Models are Zero-Shot Reasoners
Pretrained large language models (LLMs) are widely used across sub-fields of natural language processing (NLP) and are generally known to be excellent few-shot learners when given task-specific exemplars.
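The paper's core finding is that a single reasoning trigger appended to the prompt elicits chain-of-thought reasoning with no exemplars at all. A minimal sketch of its two-stage prompting scheme, where `call_llm` stands in for any text-completion API (hypothetical, not a real client):

```python
# Zero-shot chain-of-thought prompting: stage 1 elicits the reasoning
# chain, stage 2 extracts the final answer from that chain.

REASONING_TRIGGER = "Let's think step by step."

def reasoning_prompt(question: str) -> str:
    # Stage 1: append the trigger so the model generates its reasoning
    # before committing to an answer.
    return f"Q: {question}\nA: {REASONING_TRIGGER}"

def answer_prompt(question: str, reasoning: str) -> str:
    # Stage 2: feed the generated reasoning back and ask for the answer.
    return (f"Q: {question}\nA: {REASONING_TRIGGER} {reasoning}\n"
            f"Therefore, the answer is")

prompt = reasoning_prompt("I have 3 apples and eat one. How many remain?")
# In practice: reasoning = call_llm(prompt); then call_llm(answer_prompt(...))
```

The two-stage split matters because free-form reasoning text is hard to score directly; the second prompt forces a short, extractable answer.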
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
To this end, we propose a comprehensive logical reasoning explanation form.