Implicit Relations
19 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Commonsense for Generative Multi-Hop Question Answering Tasks
We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
Exploiting Explicit Paths for Multi-hop Reading Comprehension
To capture additional context, PathNet also composes the passage representations along each path to compute a passage-based representation.
Relation-Aware Graph Attention Network for Visual Question Answering
In order to answer semantically-complicated questions about an image, a Visual Question Answering (VQA) model needs to fully understand the visual scene in the image, especially the interactive dynamics between different objects.
Old is Gold: Linguistic Driven Approach for Entity and Relation Linking of Short Text
Short texts challenge NLP tasks such as named entity recognition, disambiguation, linking and relation inference because they do not provide sufficient context or are partially malformed (e.g. wrt.
Implicit Discourse Relation Identification for Open-domain Dialogues
Discourse relation identification has been an active area of research for many years, and the challenge of identifying implicit relations remains largely an unsolved task, especially in the context of an open-domain dialogue system.
Mining Temporal Evolution of Knowledge Graph and Genealogical Features for Literature-based Discovery Prediction
Existing techniques from Information Retrieval and Natural Language Processing attempt to identify hidden or unpublished connections between information concepts within published literature; however, these techniques fall short of predicting future and emerging relations among the scientific knowledge components encapsulated within that literature.
Local Explanation of Dialogue Response Generation
To gain insight into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG), which regards explanations as the mutual interaction of segments in input and output sentences.
GraFormer: Graph Convolution Transformer for 3D Pose Estimation
Exploiting relations among 2D joints plays a crucial role in 2D-to-3D pose estimation, yet remains underdeveloped.