Relational Reasoning
149 papers with code • 1 benchmark • 12 datasets
The goal of Relational Reasoning is to figure out the relationships among different entities, such as image pixels, words or sentences, human skeletons or interactive moving agents.
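A common way to implement this is to score every pair of entities with a small network and aggregate the scores, as in Relation Networks (Santoro et al., 2017). Below is a minimal PyTorch sketch of that idea; the module name, layer sizes, and output dimension are illustrative assumptions, not any specific paper's implementation.

```python
# Minimal sketch of pairwise relational reasoning in the style of
# Relation Networks (Santoro et al., 2017). All sizes are illustrative.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, obj_dim: int, hidden: int = 128, out_dim: int = 10):
        super().__init__()
        # g scores every ordered pair of entities; f aggregates the scores.
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, n_entities, obj_dim), e.g. image regions or words.
        b, n, d = objects.shape
        left = objects.unsqueeze(2).expand(b, n, n, d)   # o_i repeated
        right = objects.unsqueeze(1).expand(b, n, n, d)  # o_j repeated
        pairs = torch.cat([left, right], dim=-1).view(b, n * n, 2 * d)
        relations = self.g(pairs).sum(dim=1)  # aggregate over all pairs
        return self.f(relations)

# Usage: reason over 6 entities of dimension 32.
model = RelationModule(obj_dim=32)
out = model(torch.randn(4, 6, 32))  # -> shape (4, 10)
```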
Libraries
Use these libraries to find Relational Reasoning models and implementations.
Latest papers with no code
Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation
Moreover, we propose an interactivity estimation mechanism based on the difference between predicted trajectories in these two situations, which indicates the degree of influence of the ego agent on other agents.
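The mechanism can be pictured as a counterfactual comparison: predict the other agents' futures with the ego agent present and with it removed, then measure how much the predictions move. The sketch below assumes a hypothetical `predict_trajectories` model and `scene.without_agent` API, and the mean-displacement metric is likewise an illustrative choice, not the paper's exact formulation.

```python
import numpy as np

def interactivity_scores(predict_trajectories, scene, ego_id):
    """Hypothetical sketch: score the ego agent's influence on each other
    agent by comparing predictions with the ego present vs. removed.
    Assumes predict_trajectories(scene) -> {agent_id: (T, 2) array}."""
    with_ego = predict_trajectories(scene)
    without_ego = predict_trajectories(scene.without_agent(ego_id))  # counterfactual
    scores = {}
    for agent_id, traj in without_ego.items():
        # Mean displacement between the two predicted futures: a large
        # difference suggests the ego agent strongly influences this agent.
        diff = with_ego[agent_id] - traj
        scores[agent_id] = float(np.linalg.norm(diff, axis=-1).mean())
    return scores
```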
Large Language Models can Learn Rules
In the deduction stage, the LLM is then prompted to employ the learned rule library to perform reasoning to answer test questions.
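Concretely, the deduction stage amounts to prompting the model with the learned rule library alongside the test question. The sketch below is an illustrative prompt builder; the template wording and rule format are assumptions, not the paper's exact prompts.

```python
def deduction_prompt(rule_library: list[str], question: str) -> str:
    """Illustrative sketch of the deduction stage: show the LLM the learned
    rules and ask it to apply them to a test question."""
    rules = "\n".join(f"- {r}" for r in rule_library)
    return (
        "You are given a library of learned rules:\n"
        f"{rules}\n\n"
        "Using only these rules, reason step by step and answer:\n"
        f"Question: {question}\nAnswer:"
    )

# Usage with a hypothetical `llm` callable mapping prompt -> completion:
# answer = llm(deduction_prompt(
#     ["If X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z."],
#     "Alice is a parent of Bob; Bob is a parent of Carol. Who is Alice a grandparent of?"))
```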
A Novel Neural-symbolic System under Statistical Relational Learning
A key objective in the field of artificial intelligence is to develop cognitive models that can exhibit human-like intellectual capabilities.
Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis
Although they demonstrate superb performance on various NLP tasks, large language models (LLMs) still suffer from the hallucination problem, which threatens their reliability.
Lifted Inference beyond First-Order Logic
We extend a vast array of previous results from the discrete mathematics literature on directed acyclic graphs, phylogenetic networks, etc.
CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models
Specifically, we extract relevant commonsense knowledge in inputs as references to align model behavior with human knowledge.
LightPath: Lightweight and Scalable Path Representation Learning
Next, we propose a relational reasoning framework to enable faster training of more robust sparse path encoders.
Statistical relational learning and neuro-symbolic AI: what does first-order logic offer?
In this paper, our aim is to briefly survey and articulate the logical and philosophical foundations of using (first-order) logic to represent (probabilistic) knowledge in a non-technical fashion.
Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using Continual Learning
In this paper, we show that by combining a neural-symbolic system with methods from continual learning, Logic Tensor Networks (LTN) can achieve higher accuracy when addressing non-monotonic reasoning tasks.
Cluster Flow: how a hierarchical clustering layer makes deep-NNs more resilient to hacking, more human-like, and easily implements relational reasoning
Humans can do this even with objects they have never seen before.