Entity Typing
75 papers with code • 7 benchmarks • 10 datasets
Entity Typing is an important task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables effective structured analysis of unstructured text corpora. The extracted type information can be used in a wide range of ways, e.g., as primitives for information extraction and knowledge base (KB) completion, or to assist question answering. Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies address a much larger set of fine-grained types that form a tree-structured hierarchy (e.g., actor as a subtype of artist, which is in turn a subtype of person).
Source: Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding
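To make the task concrete, here is a minimal sketch that assigns coarse types with an off-the-shelf NER model and walks a toy fine-grained hierarchy. The spaCy pipeline is assumed to be installed, and the hierarchy is an illustrative assumption, not a standard ontology.

```python
# A minimal sketch of entity typing, assuming spaCy and its small English
# pipeline (en_core_web_sm) are installed; the hierarchy below is a toy example.
import spacy

# Hypothetical tree-structured hierarchy: child type -> parent type.
PARENT = {"actor": "artist", "artist": "person"}

def ancestors(t: str) -> list[str]:
    """Walk up the hierarchy from a fine-grained type to its root."""
    chain = [t]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tom Hanks was born in Concord and works for Playtone.")

# spaCy assigns coarse types such as PERSON, GPE (location), and ORG.
for ent in doc.ents:
    print(ent.text, "->", ent.label_)

print(ancestors("actor"))  # ['actor', 'artist', 'person']
```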
Most implemented papers
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.
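LUKE ships with a Hugging Face transformers integration; the sketch below assumes the publicly released checkpoint fine-tuned for entity typing on Open Entity (studio-ousia/luke-large-finetuned-open-entity).

```python
# A minimal sketch of entity typing with LUKE via Hugging Face transformers;
# assumes the studio-ousia/luke-large-finetuned-open-entity checkpoint.
from transformers import LukeTokenizer, LukeForEntityClassification

name = "studio-ousia/luke-large-finetuned-open-entity"
tokenizer = LukeTokenizer.from_pretrained(name)
model = LukeForEntityClassification.from_pretrained(name)

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character span of the mention "Beyoncé"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print("Predicted type:", model.config.id2label[predicted])
```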
Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing
Our model not only effectively utilizes entities and their latent types as features, but is also more interpretable, as shown by visualizing the attention mechanism and the results of latent entity typing (LET).
Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding
Current systems of fine-grained entity typing use distant supervision in conjunction with existing knowledge bases to assign categories (type labels) to entity mentions.
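To illustrate the noise problem this paper targets, the toy sketch below mimics distant supervision: every mention of an entity inherits all of that entity's KB types, so context-inappropriate labels slip in. The KB entry and sentences are hypothetical.

```python
# Toy illustration of distant-supervision noise in fine-grained typing:
# each mention inherits every type its KB entity has, regardless of context.
KB_TYPES = {  # hypothetical KB entry
    "Arnold Schwarzenegger": {"person", "artist", "actor", "politician"},
}

mentions = [
    ("Arnold Schwarzenegger", "... starred in Terminator 2 ..."),   # actor sense
    ("Arnold Schwarzenegger", "... was sworn in as governor ..."),  # politician sense
]

for entity, context in mentions:
    # Distant supervision assigns the full, noisy candidate set;
    # label-noise reduction methods try to keep only the context-correct subset.
    print(context.strip(), "->", sorted(KB_TYPES[entity]))
```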
Representation Learning of Entities and Documents from Knowledge Base Descriptions
In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).
Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies.
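One simple way to exploit such hierarchies (a sketch of the general idea, not this paper's exact loss) is to close each gold label set under its ancestors before computing a multi-label loss:

```python
# Sketch of a common hierarchy-aware trick: expand gold labels to include
# all ancestors, then train with multi-label binary cross-entropy.
import torch

PARENT = {"actor": "artist", "artist": "person"}   # toy hierarchy
TYPES = ["person", "artist", "actor"]              # label vocabulary
IDX = {t: i for i, t in enumerate(TYPES)}

def expand(labels: set[str]) -> set[str]:
    """Close a label set under the hierarchy."""
    closed = set(labels)
    for t in labels:
        while t in PARENT:
            t = PARENT[t]
            closed.add(t)
    return closed

gold = expand({"actor"})                           # {'actor', 'artist', 'person'}
target = torch.zeros(len(TYPES))
target[[IDX[t] for t in gold]] = 1.0

logits = torch.randn(len(TYPES))                   # stand-in model output
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
print(sorted(gold), loss.item())
```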
ERNIE: Enhanced Language Representation with Informative Entities
Neural language representation models such as BERT, pre-trained on large-scale corpora, capture rich semantic patterns from plain text and can be fine-tuned to consistently improve performance on various NLP tasks.
EntEval: A Holistic Evaluation Benchmark for Entity Representations
Rich entity representations are useful for a wide class of problems involving entities.
MTab: Matching Tabular Data to Knowledge Graph using Probability Models
This paper presents the design of our system, namely MTab, for the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019).
Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks
LEOPARD is trained with a state-of-the-art transformer architecture and shows better generalization to tasks not seen at all during training, with as few as 4 examples per label.
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
We study the problem of injecting knowledge into large pre-trained models like BERT and RoBERTa.
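As a rough illustration of the adapter idea, here is a generic bottleneck adapter in PyTorch. This is a sketch of the general technique only; K-Adapter's actual design differs (it plugs richer adapter modules, containing transformer layers, into intermediate layers of a frozen model).

```python
# A generic bottleneck adapter sketch (illustrative, not K-Adapter's exact module):
# down-project, nonlinearity, up-project, residual connection.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)  # compress hidden states
        self.up = nn.Linear(bottleneck, hidden)    # project back up
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact;
        # only the small adapter weights are trained to inject new knowledge.
        return x + self.up(self.act(self.down(x)))

x = torch.randn(2, 10, 768)       # (batch, sequence, hidden), stand-in activations
print(Adapter()(x).shape)         # torch.Size([2, 10, 768])
```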