We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. This paper shows that a simple baseline based on a Bag-of-Words (BoW) representation learns surprisingly good knowledge graph embeddings.
Ranked #2 on Link Prediction on FB122
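As a rough illustration of the BoW baseline idea, the sketch below averages word embeddings over an entity's surface name and scores a triple with a TransE-style distance. The averaging-plus-translation setup, the random initialization, and all names are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch of a Bag-of-Words (BoW) entity embedding: an entity
# vector is the mean of its tokens' embeddings, and a triple is scored
# TransE-style. The scoring choice is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

vocab = {}                       # word -> embedding, grown on demand
def word_vec(w):
    if w not in vocab:
        vocab[w] = rng.normal(scale=0.1, size=DIM)
    return vocab[w]

def entity_vec(name):
    """BoW entity embedding: mean of the embeddings of its tokens."""
    return np.mean([word_vec(w) for w in name.lower().split()], axis=0)

relation = rng.normal(scale=0.1, size=DIM)   # one vector per relation

def score(head, tail):
    """Lower is better: TransE-style distance ||h + r - t||."""
    return np.linalg.norm(entity_vec(head) + relation - entity_vec(tail))

print(score("barack obama", "united states"))
```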
Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce.
Ranked #6 on KG-to-Text Generation on AGENDA
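To make the notion of a document plan concrete, here is a hedged sketch of one plausible representation: an ordered list of sentence plans, each grouping the knowledge-graph triples a sentence should verbalize. The schema and the example triples are illustrative assumptions, not a specific paper's format.

```python
# A hypothetical document plan: an ordered list of sentence plans, each
# listing the triples one sentence should express. The schema is an
# illustrative assumption, not a specific paper's exact structure.
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

@dataclass
class SentencePlan:
    triples: list    # triples verbalized together in one sentence

document_plan = [
    SentencePlan([Triple("Marie Curie", "born_in", "Warsaw")]),
    SentencePlan([Triple("Marie Curie", "field", "physics"),
                  Triple("Marie Curie", "awarded", "Nobel Prize")]),
]

for i, plan in enumerate(document_plan, 1):
    print(f"sentence {i}:",
          [(t.head, t.relation, t.tail) for t in plan.triples])
```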
Furthermore, combining the Jumping Knowledge (JK) framework with models such as Graph Convolutional Networks, GraphSAGE, and Graph Attention Networks consistently improves those models' performance.
Ranked #14 on Node Classification on PPI
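The JK aggregation step itself is small; the sketch below shows the max-pooling variant, where each node's final embedding takes the element-wise maximum over its per-layer GNN representations. The per-layer activations here are random stand-ins for real GCN/GraphSAGE/GAT outputs.

```python
# A minimal sketch of Jumping Knowledge (JK) aggregation with max-pooling:
# each node keeps its representation from every GNN layer, and the final
# embedding is the element-wise max across layers.
import torch

num_nodes, dim, num_layers = 4, 8, 3

# Stand-in per-layer node representations h^(1), ..., h^(L).
layer_outputs = [torch.randn(num_nodes, dim) for _ in range(num_layers)]

# Stack to (L, N, D) and max-pool over the layer axis, letting each node
# adaptively "jump" to the most informative layer per feature dimension.
h_final = torch.stack(layer_outputs, dim=0).max(dim=0).values

print(h_final.shape)  # torch.Size([4, 8])
```

Concatenation (`torch.cat(layer_outputs, dim=-1)`) and an attention-weighted LSTM readout are the other JK aggregators described in the original framework.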
The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp.
This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account the contextual nature of entities and relations, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.
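As a hedged sketch of what "contextualized" means here, in the spirit of CoKE: a triple with a masked tail is treated as a short token sequence, encoded with a Transformer, and the representation at the mask position scores candidate entities. All sizes, the vocabulary layout, and the single-triple context are toy assumptions; CoKE also handles longer path contexts.

```python
# A toy sketch of contextualized triple encoding: encode (h, r, [MASK])
# with a Transformer and predict the masked tail entity. Sizes, the id
# layout, and the prediction head are illustrative assumptions.
import torch
import torch.nn as nn

num_entities, num_relations, dim = 100, 10, 32
MASK = num_entities + num_relations          # extra id for the [MASK] token

embed = nn.Embedding(num_entities + num_relations + 1, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
entity_head = nn.Linear(dim, num_entities)   # score all entities at the mask

# Query (h=5, r=num_entities+3, t=?) as a length-3 sequence.
seq = torch.tensor([[5, num_entities + 3, MASK]])
ctx = encoder(embed(seq))                    # (1, 3, dim), contextualized
logits = entity_head(ctx[:, 2])              # predict the masked tail entity
print(logits.shape)                          # torch.Size([1, 100])
```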
We present a new dataset of Wikipedia articles, each paired with a knowledge graph, to facilitate research on conditional text generation, graph generation, and graph representation learning.
Ranked #1 on KG-to-Text Generation on WikiGraphs
AI legal assistants based on Large Language Models (LLMs) can provide accessible legal consulting services, but the hallucination problem poses potential legal risks.
In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data.
The extraction of aspect terms is a critical step in fine-grained sentiment analysis of text.
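To pin down the task's input/output format, here is a toy example framing aspect term extraction as BIO sequence labeling, with hand-written gold tags; the tag scheme and the decoding helper are illustrative assumptions.

```python
# Aspect term extraction as BIO sequence labeling: a toy example with
# hand-written gold tags, just to illustrate the input/output format.
tokens = ["The", "battery", "life", "is", "great", "but", "the", "screen", "scratches"]
tags   = ["O",   "B-ASP",   "I-ASP", "O",  "O",    "O",   "O",   "B-ASP",  "O"]

def extract_aspects(tokens, tags):
    """Collect contiguous B-ASP/I-ASP spans into aspect terms."""
    aspects, span = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-ASP":
            if span:
                aspects.append(" ".join(span))
            span = [tok]
        elif tag == "I-ASP" and span:
            span.append(tok)
        else:
            if span:
                aspects.append(" ".join(span))
            span = []
    if span:
        aspects.append(" ".join(span))
    return aspects

print(extract_aspects(tokens, tags))  # ['battery life', 'screen']
```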