Relational Reasoning
149 papers with code • 1 benchmark • 12 datasets
The goal of Relational Reasoning is to infer the relationships among different entities, such as image pixels, words or sentences, human skeletons, or interacting moving agents.
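A common way to operationalize this definition is to score every pair of entity embeddings with a small network and aggregate the pair features, as in Relation Networks. The sketch below is a minimal NumPy illustration of that pairwise pattern; all weights and names are illustrative, not taken from any paper on this page.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relation_score(entities, W_g, W_f):
    """Pairwise relational aggregation: apply a small network g to every
    ordered pair of entities, sum the pair features (permutation-invariant),
    then map the sum through an output network f."""
    n, d = entities.shape
    pair_feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.concatenate([entities[i], entities[j]])  # (2d,)
            pair_feats.append(relu(pair @ W_g))                # g(o_i, o_j)
    summed = np.sum(pair_feats, axis=0)                        # sum over all pairs
    return summed @ W_f                                        # f(sum)

rng = np.random.default_rng(0)
entities = rng.normal(size=(4, 8))   # 4 entities, 8-dim embeddings
W_g = rng.normal(size=(16, 32))      # pair input (2*8) -> hidden
W_f = rng.normal(size=(32, 3))       # hidden -> 3 output logits
out = relation_score(entities, W_g, W_f)
```

Because the aggregation sums over all ordered pairs, the output is invariant to reordering the entities, which is what makes the pattern "relational" rather than sequential.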
Latest papers
Learning the meanings of function words from grounded language using a visual question answering model
Furthermore, we find that these models can learn the meanings of the logical connectives "and" and "or" without any prior knowledge of logical reasoning, and we find early evidence that they are sensitive to alternative expressions when interpreting language.
Anticipating Technical Expertise and Capability Evolution in Research Communities using Dynamic Graph Transformers
The ability to anticipate technical expertise and capability evolution trends globally is essential for national and global security, especially in safety-critical domains like nuclear nonproliferation (NN) and rapidly emerging fields like artificial intelligence (AI).
A Multi-Task Perspective for Link Prediction with New Relation Types and Nodes
The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs.
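As a generic illustration of what "inferring missing links" means (a DistMult-style bilinear scorer, not the method of the paper above; all names and sizes are made up), candidate links can be ranked by a score over node and relation embeddings:

```python
import numpy as np

# Toy link-prediction scorer: score(u, r, v) = <e_u, w_r, e_v>,
# the DistMult bilinear form. Higher score = more plausible link.
rng = np.random.default_rng(1)
num_nodes, num_rels, dim = 5, 3, 4
node_emb = rng.normal(size=(num_nodes, dim))
rel_emb = rng.normal(size=(num_rels, dim))

def score(u, r, v):
    """Plausibility of the triple (head u, relation r, tail v)."""
    return float(np.sum(node_emb[u] * rel_emb[r] * node_emb[v]))

# Rank all candidate tails for the query (head=0, relation=1, ?).
scores = [score(0, 1, v) for v in range(num_nodes)]
best_tail = int(np.argmax(scores))
```

In the inductive setting studied by the paper, the test graph contains nodes and relation types never seen in training, so embeddings like these cannot simply be looked up; they must be computed from node and edge attributes.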
Large Class Separation is not what you need for Relational Reasoning-based OOD Detection
We focus on exactly this fine-tuning-free OOD detection setting.
From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.
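To make "probabilistic programs as a representation for commonsense reasoning" concrete, here is a tiny generic probabilistic program (plain Python with rejection sampling, not the paper's probabilistic programming language or its LLM translation layer): a generative model is written as ordinary code, and a query conditions on observed evidence.

```python
import random

# A two-cause model of wet grass: rain and a sprinkler can each wet the
# grass. We estimate P(rain | grass is wet) by rejection sampling:
# sample from the program, keep only samples consistent with the evidence.
def model():
    rain = random.random() < 0.2       # prior: P(rain) = 0.2
    sprinkler = random.random() < 0.3  # prior: P(sprinkler) = 0.3
    wet = rain or sprinkler            # deterministic consequence
    return rain, wet

random.seed(0)
samples = [model() for _ in range(100_000)]
wet_samples = [r for r, w in samples if w]
posterior = sum(wet_samples) / len(wet_samples)  # ~ P(rain | wet) ≈ 0.45
```

The analytic answer is P(rain | wet) = 0.2 / (0.2 + 0.8 × 0.3) ≈ 0.455; conditioning raises the probability of rain above its 0.2 prior, which is the kind of inference the "language of thought" framing delegates to the probabilistic program rather than to the LLM.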
Shift-Robust Molecular Relational Learning with Causal Substructure
To do so, we first assume a causal relationship based on the domain knowledge of molecular sciences and construct a structural causal model (SCM) that reveals the relationship between variables.
In-Context Analogical Reasoning with Pre-Trained Language Models
Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences.
Modularized Zero-shot VQA with Pre-trained Models
We propose a modularized zero-shot network that explicitly decomposes questions into sub-reasoning steps and is highly interpretable.
Visual Causal Scene Refinement for Video Question Answering
Our VCSR involves two essential modules: i) the Question-Guided Refiner (QGR) module, which refines consecutive video frames guided by the question semantics to obtain more representative segment features for causal front-door intervention; ii) the Causal Scene Separator (CSS) module, which discovers a collection of visual causal and non-causal scenes based on the visual-linguistic causal relevance and estimates the causal effect of the scene-separating intervention in a contrastive learning manner.
Conditional Graph Information Bottleneck for Molecular Relational Learning
Molecular relational learning, whose goal is to learn the interaction behavior between molecular pairs, has attracted a surge of interest in the molecular sciences due to its wide range of applications.