Entity Typing

87 papers with code • 7 benchmarks • 11 datasets

Entity Typing is a core task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables structured analysis of unstructured text corpora. The extracted type information serves a wide range of uses, e.g., as primitives for information extraction and knowledge base (KB) completion, or to assist question answering. Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies target a much larger set of fine-grained types that form a tree-structured hierarchy (e.g., actor as a subtype of artist, and artist as a subtype of person).
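The tree-structured hierarchy mentioned above can be sketched with a minimal example. The hierarchy below and the helper functions are illustrative assumptions, not any particular system's API; they only show how a fine-grained label like actor subsumes the coarser labels artist and person.

```python
# Minimal sketch of a tree-structured type hierarchy for fine-grained
# entity typing. The specific types and helpers here are illustrative
# assumptions, not part of any benchmark or library.

# Each type maps to its parent; root types map to None.
TYPE_PARENT = {
    "person": None,
    "artist": "person",
    "actor": "artist",
    "location": None,
    "organization": None,
}

def ancestors(t):
    """Path from type t up to its root, e.g. actor -> artist -> person."""
    path = []
    while t is not None:
        path.append(t)
        t = TYPE_PARENT[t]
    return path

def is_subtype(child, parent):
    """True if `child` equals `parent` or lies below it in the hierarchy."""
    return parent in ancestors(child)

# A mention typed with the fine-grained label "actor" also counts as
# "artist" and "person" under coarse-grained evaluation.
print(ancestors("actor"))          # ['actor', 'artist', 'person']
print(is_subtype("actor", "person"))  # True
```

In practice, fine-grained typing systems predict a set of such hierarchical labels per mention, and evaluation often credits partial matches along the ancestor path rather than requiring the exact leaf type.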

Source: Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding

Most implemented papers

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

studio-ousia/luke EMNLP 2020

In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.

Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing

roomylee/entity-aware-relation-classification 23 Jan 2019

Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET.

Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding

shanzhenren/PLE 17 Feb 2016

Current systems of fine-grained entity typing use distant supervision in conjunction with existing knowledge bases to assign categories (type labels) to entity mentions.

Representation Learning of Entities and Documents from Knowledge Base Descriptions

wikipedia2vec/wikipedia2vec COLING 2018

In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).

Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking

chanzuckerberg/MedMentions ACL 2018

Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies.

ERNIE: Enhanced Language Representation with Informative Entities

thunlp/ERNIE ACL 2019

Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks.

EntEval: A Holistic Evaluation Benchmark for Entity Representations

ZeweiChu/EntEval IJCNLP 2019

Rich entity representations are useful for a wide class of problems involving entities.

MTab: Matching Tabular Data to Knowledge Graph using Probability Models

phucty/mtab_tool 1 Oct 2019

This paper presents the design of our system, namely MTab, for Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019).

Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks

iesl/leopard COLING 2020

LEOPARD is trained with the state-of-the-art transformer architecture and shows better generalization to tasks not seen at all during training, with as few as 4 examples per label.

K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters

microsoft/K-Adapter Findings (ACL) 2021

We study the problem of injecting knowledge into large pre-trained models like BERT and RoBERTa.