Relational Reasoning

16 papers with code · Natural Language Processing

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Relational inductive biases, deep learning, and graph networks

4 Jun 2018 deepmind/graph_nets

This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.

DECISION MAKING RELATIONAL REASONING

Temporal Relational Reasoning in Videos

ECCV 2018 metalbubble/TRN-pytorch

Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales.

ACTIVITY RECOGNITION COMMON SENSE REASONING HUMAN-OBJECT INTERACTION DETECTION RELATIONAL REASONING
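The multi-scale relation the TRN abstract describes sums a learned function over ordered d-frame tuples, T(V) = sum_d h_d( sum over ordered d-tuples of g_d(f_i1, ..., f_id) ). A minimal NumPy sketch of that composition, with toy linear functions standing in for the paper's learned g_d and h_d networks; all shapes and names here are illustrative assumptions, not the authors' implementation (see metalbubble/TRN-pytorch for that):

```python
import numpy as np
from itertools import combinations

def temporal_relations(frames, g_by_scale, h_by_scale):
    """Multi-scale temporal relations:
    T(V) = sum_d h_d( sum over ordered d-frame tuples of g_d(f_i1..f_id) ).
    frames: list of per-frame feature vectors, already in temporal order.
    g_by_scale / h_by_scale: one (g_d, h_d) pair per scale d (plain
    functions here, standing in for learned MLPs)."""
    n = len(frames)
    total = 0.0
    for d, g in g_by_scale.items():
        # combinations() preserves temporal order within each d-tuple
        tuple_sum = sum(g(np.concatenate([frames[i] for i in idx]))
                        for idx in combinations(range(n), d))
        total = total + h_by_scale[d](tuple_sum)
    return total

rng = np.random.default_rng(0)
feat, out = 4, 3
frames = [rng.normal(size=feat) for _ in range(5)]
# Toy linear stand-ins for the learned g_d and h_d networks
W2 = rng.normal(size=(2 * feat, 8))
W3 = rng.normal(size=(3 * feat, 8))
H = rng.normal(size=(8, out))
g_by_scale = {2: lambda x: np.maximum(x @ W2, 0),
              3: lambda x: np.maximum(x @ W3, 0)}
h_by_scale = {2: lambda x: x @ H, 3: lambda x: x @ H}
y = temporal_relations(frames, g_by_scale, h_by_scale)
print(y.shape)  # (3,)
```

Summing over ordered tuples at several scales is what lets the module capture both short- and long-range frame dependencies in one pass.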

A simple neural network module for relational reasoning

NeurIPS 2017 gitlimlab/Relation-Network-Tensorflow

Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning.

QUESTION ANSWERING RELATIONAL REASONING VISUAL QUESTION ANSWERING
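The "plug-and-play" module in this paper computes RN(O) = f_phi( sum over all object pairs (i, j) of g_theta(o_i, o_j) ): a pairwise function g aggregated by summation, then mapped by f. A hedged NumPy sketch of that formula, with toy MLP weights (all dimensions are illustrative assumptions; the linked repo holds a reference implementation):

```python
import numpy as np

def mlp(x, weights):
    """Tiny MLP: alternating linear layers with ReLU between them."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

def relation_network(objects, g_weights, f_weights):
    """RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) ).
    objects: (n, d) array of object representations."""
    n, d = objects.shape
    # every ordered pair (o_i, o_j), concatenated along features
    oi = np.repeat(objects, n, axis=0)         # (n*n, d)
    oj = np.tile(objects, (n, 1))              # (n*n, d)
    pairs = np.concatenate([oi, oj], axis=1)   # (n*n, 2d)
    relations = mlp(pairs, g_weights).sum(axis=0)  # aggregate over pairs
    return mlp(relations, f_weights)

rng = np.random.default_rng(0)
d, h, out = 4, 8, 3
g_w = [(rng.normal(size=(2 * d, h)), np.zeros(h)),
       (rng.normal(size=(h, h)), np.zeros(h))]
f_w = [(rng.normal(size=(h, h)), np.zeros(h)),
       (rng.normal(size=(h, out)), np.zeros(out))]
y = relation_network(rng.normal(size=(5, d)), g_w, f_w)
print(y.shape)  # (3,)
```

Because the sum over pairs is permutation-invariant, the module can be dropped on top of any encoder that produces a set of object vectors.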

Relational recurrent neural networks

NeurIPS 2018 L0SG/relational-rnn-pytorch

Memory-based neural networks model temporal data by leveraging an ability to remember information for long periods. It is unclear, however, whether they also have an ability to perform complex relational reasoning with the information they remember.

LANGUAGE MODELLING RELATIONAL REASONING

Knowledge Graph Completion via Complex Tensor Factorization

22 Feb 2017 ttrouill/complex

In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) and predicting missing relationships (labeled edges). State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity.

KNOWLEDGE GRAPH COMPLETION LINK PREDICTION RELATIONAL REASONING
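The complex tensor factorization here (ComplEx) scores a triple (s, r, o) as Re( sum_k w_r[k] * e_s[k] * conj(e_o[k]) ), a trilinear product over complex-valued embeddings; the conjugate on the object side is what lets a single embedding per relation model asymmetric edges. A sketch under those definitions (the embedding dimension and random vectors are illustrative assumptions):

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """ComplEx scoring function:
    phi(s, r, o) = Re( sum_k w_r[k] * e_s[k] * conj(e_o[k]) ).
    All arguments are complex-valued embedding vectors."""
    return np.real(np.sum(w_r * e_s * np.conj(e_o)))

rng = np.random.default_rng(1)
k = 16  # embedding dimension (illustrative)
e_s = rng.normal(size=k) + 1j * rng.normal(size=k)
w_r = rng.normal(size=k) + 1j * rng.normal(size=k)
e_o = rng.normal(size=k) + 1j * rng.normal(size=k)

# Swapping subject and object generally changes the score,
# which is what lets ComplEx model asymmetric relations.
print(complex_score(e_s, w_r, e_o), complex_score(e_o, w_r, e_s))
```

If the relation embedding w_r is purely real the score becomes symmetric in s and o, so the imaginary part of w_r carries the asymmetry.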

Holographic Embeddings of Knowledge Graphs

16 Oct 2015 mnick/holographic-embeddings

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs.

KNOWLEDGE GRAPHS LINK PREDICTION RELATIONAL REASONING
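HolE composes the two entity embeddings with circular correlation, [a star b]_k = sum_i a[i] * b[(i + k) mod d], and scores a triple as sigma( r . (e_s star b e_o) ); the correlation can be computed in O(d log d) via the FFT. A sketch under those definitions (dimension and vectors are illustrative assumptions):

```python
import numpy as np

def circular_correlation(a, b):
    """[a star b]_k = sum_i a[i] * b[(i + k) mod d], via the
    correlation theorem: ifft( conj(fft(a)) * fft(b) )."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(r, e_s, e_o):
    """HolE: sigma( r . (e_s star e_o) ), a probability-like
    score for link prediction."""
    return 1.0 / (1.0 + np.exp(-(r @ circular_correlation(e_s, e_o))))

rng = np.random.default_rng(2)
d = 8  # embedding dimension (illustrative)
r, e_s, e_o = (rng.normal(size=d) for _ in range(3))
p = hole_score(r, e_s, e_o)
print(p)  # a value in (0, 1)
```

Unlike a full tensor product, the correlation keeps the composed representation at dimension d, which is the memory advantage the paper trades on.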

Recurrent Relational Networks

NeurIPS 2018 rasmusbergpalm/recurrent-relational-networks

We achieve state of the art results on the bAbI textual question-answering dataset with the recurrent relational network, consistently solving 20/20 tasks. As bAbI is not particularly challenging from a relational reasoning point of view, we introduce Pretty-CLEVR, a new diagnostic dataset for relational reasoning.

QUESTION ANSWERING RELATIONAL REASONING

One-Shot Relational Learning for Knowledge Graphs

EMNLP 2018 xwhan/One-shot-Relational-Learning

Knowledge graphs (KGs) are the key components of various natural language processing applications. To further expand KGs' coverage, previous studies on knowledge graph completion usually require a large number of training instances for each relation.

KNOWLEDGE GRAPH COMPLETION RELATIONAL REASONING

Adversarial Sets for Regularising Neural Link Predictors

24 Jul 2017 uclmr/inferbeddings

The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples. This yields the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size.

LINK PREDICTION RELATIONAL REASONING

Compositional Language Understanding with Text-based Relational Reasoning

7 Nov 2018 koustuvsinha/clutrr

Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference. However, it is also crucial to understand the extent to which neural networks can perform relational reasoning and combinatorial generalization from natural language, abilities that are often obscured by annotation artifacts and the dominance of language modeling in standard QA benchmarks.

COMMON SENSE REASONING LANGUAGE MODELLING QUESTION ANSWERING RELATIONAL REASONING