Entity Typing

88 papers with code • 7 benchmarks • 11 datasets

Entity Typing is an important task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables effective structured analysis of unstructured text corpora. The extracted type information can be used in a wide range of ways (e.g., serving as primitives for information extraction and knowledge base (KB) completion, and assisting question answering). Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies work on a much larger set of fine-grained types, which form a tree-structured hierarchy (e.g., actor as a subtype of artist, and artist as a subtype of person).
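The tree-structured hierarchy described above can be sketched as a parent map over types. This is a minimal illustration, not a real system's ontology; the type names and structure are the examples from the paragraph above.

```python
# Minimal sketch of a fine-grained type hierarchy: each type maps to its
# parent, and coarse (top-level) types map to None. Names are illustrative.
TYPE_PARENT = {
    "person": None,
    "artist": "person",
    "actor": "artist",
    "location": None,
    "organization": None,
}

def ancestors(t):
    """Return the chain of supertypes of t, from nearest to coarsest."""
    chain = []
    while TYPE_PARENT.get(t) is not None:
        t = TYPE_PARENT[t]
        chain.append(t)
    return chain

def is_subtype(t, super_t):
    """True if t equals super_t or super_t appears among t's ancestors."""
    return t == super_t or super_t in ancestors(t)

print(ancestors("actor"))             # ['artist', 'person']
print(is_subtype("actor", "person"))  # True
```

A fine-grained typing system predicts a leaf such as `actor` for a mention, and the hierarchy lets downstream consumers fall back to the coarse type `person` when needed.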

Source: Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding

Latest papers with no code

Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings

no code yet • 25 Mar 2024

We show that this leads to embeddings which capture a more diverse range of commonsense properties, and consistently improves results in downstream tasks such as ultra-fine entity typing and ontology completion.

From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification

no code yet • 10 Mar 2024

We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.

ConcEPT: Concept-Enhanced Pre-Training for Language Models

no code yet • 11 Jan 2024

In this paper, we propose ConcEPT, which stands for Concept-Enhanced Pre-Training for language models, to infuse conceptual knowledge into PLMs.

What do Deck Chairs and Sun Hats Have in Common? Uncovering Shared Properties in Large Concept Vocabularies

no code yet • 23 Oct 2023

We show that by augmenting the label set with shared properties, we can improve the performance of the state-of-the-art models for this task.

Ontology Enrichment for Effective Fine-grained Entity Typing

no code yet • 11 Oct 2023

In this study, we propose OnEFET, where we (1) enrich each node in the ontology structure with two types of extra information: instance information for training sample augmentation and topic information to relate types to contexts, and (2) develop a coarse-to-fine typing algorithm that exploits the enriched information by training an entailment model with contrasting topics and instance-based augmented training samples.
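The coarse-to-fine part of this pipeline can be sketched as a greedy descent through the type ontology, keeping the best-scoring child at each level. This is only an illustration of the control flow: the ontology, the mention, and the scorer below are all made up (OnEFET trains a real entailment model; here a mock scorer stands in for it).

```python
# Hedged sketch of coarse-to-fine typing over a toy ontology.
# CHILDREN maps each type to its subtypes; "ROOT" holds the coarse types.
CHILDREN = {
    "ROOT": ["person", "location"],
    "person": ["artist"],
    "artist": ["actor"],
    "location": [],
    "actor": [],
}

def mock_entail_score(mention, t):
    # Stand-in for a trained entailment model: pretend the mention
    # entails the person -> artist -> actor branch.
    return 1.0 if t in ("person", "artist", "actor") else 0.0

def coarse_to_fine(mention, threshold=0.5):
    """Descend from coarse to fine types, stopping when no child scores
    above the threshold. Returns the path of accepted types."""
    path, node = [], "ROOT"
    while CHILDREN[node]:
        best = max(CHILDREN[node], key=lambda c: mock_entail_score(mention, c))
        if mock_entail_score(mention, best) < threshold:
            break
        path.append(best)
        node = best
    return path

print(coarse_to_fine("Tom Hanks"))  # ['person', 'artist', 'actor']
```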

SLHCat: Mapping Wikipedia Categories and Lists to DBpedia by Leveraging Semantic, Lexical, and Hierarchical Features

no code yet • 21 Sep 2023

Assigning DBpedia classes to Wikipedia categories and lists can alleviate this problem, yielding a large knowledge graph that is essential for categorizing digital content through entity linking and typing.
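One of the lexical signals such a mapping can use is the head word of a category name (e.g., "American film actors" heads on "actors"). The sketch below is a deliberately naive illustration of that one feature, not SLHCat's method; the class table and the head-word heuristic are assumptions for the example.

```python
# Hedged sketch: map a Wikipedia category name to a DBpedia class
# by its lexical head word. The class table is illustrative.
DBPEDIA_CLASSES = {"actor": "dbo:Actor", "city": "dbo:City"}

def head_word(category):
    # Naive head extraction: take the last token and crudely singularize.
    w = category.split()[-1].lower()
    return w[:-1] if w.endswith("s") else w

def map_category(category):
    """Return the DBpedia class for a category name, or None."""
    return DBPEDIA_CLASSES.get(head_word(category))

print(map_category("American film actors"))  # dbo:Actor
```

A real system combines this lexical feature with semantic (embedding-based) and hierarchical features, since head words alone fail on categories like "Cities in France" phrased with trailing modifiers.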

AsyncET: Asynchronous Learning for Knowledge Graph Entity Typing with Auxiliary Relations

no code yet • 30 Aug 2023

Previously, KG embedding (KGE) methods tried to solve the KGET task by introducing an auxiliary relation, 'hasType', to model the relationship between entities and their types.
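The 'hasType' trick amounts to folding entity-type assignments into the graph as ordinary triples, so a single KGE model (e.g., TransE) scores them like any other fact. A minimal sketch of that reformulation, with made-up example triples:

```python
# Hedged sketch: recast entity typing as link prediction by adding
# (entity, hasType, type) triples to the knowledge graph. Data is toy.
kg_triples = [
    ("Paris", "capitalOf", "France"),
    ("Einstein", "bornIn", "Ulm"),
]
type_assignments = {"Paris": "city", "Einstein": "physicist"}

# Fold typing into the graph: one embedding model now handles both
# relational facts and type facts via the auxiliary 'hasType' relation.
typed_triples = kg_triples + [
    (e, "hasType", t) for e, t in type_assignments.items()
]

print(len(typed_triples))  # 4
```

A KGE model trained on `typed_triples` can then predict missing types by scoring candidate `(entity, hasType, ?)` completions; AsyncET's point is that a single auxiliary relation is too coarse, motivating multiple auxiliary relations.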

Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy

no code yet • 22 May 2023

In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types.
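The domain-augmentation idea can be sketched with toy 2-D label embeddings and nearest-centroid assignment. Everything here is an assumption for illustration: real systems would use pre-trained label embeddings (e.g., GloVe or BERT) and learn the clusters (e.g., with k-means) rather than fixing centroids by hand.

```python
# Hedged sketch: cluster labels into semantic domains, then add each
# label's domain as an extra type. Embeddings and centroids are toy values.
import math

label_embeddings = {
    "actor":    (0.9, 0.1),
    "musician": (0.8, 0.2),
    "city":     (0.1, 0.9),
    "country":  (0.2, 0.8),
}

# Two illustrative domains with hand-picked centroids (a real pipeline
# would learn these via clustering).
centroids = {"PERSON_DOMAIN": (0.85, 0.15), "PLACE_DOMAIN": (0.15, 0.85)}

def nearest(cents, v):
    """Return the centroid name closest to vector v (Euclidean)."""
    return min(cents, key=lambda c: math.dist(v, cents[c]))

# Augment each fine-grained label with its semantic domain as an
# additional type, as the clustering strategy above suggests.
augmented = {lbl: {lbl, nearest(centroids, v)}
             for lbl, v in label_embeddings.items()}

print(sorted(augmented["actor"]))  # ['PERSON_DOMAIN', 'actor']
```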

OntoType: Ontology-Guided Zero-Shot Fine-Grained Entity Typing with Weak Supervision from Pre-Trained Language Models

no code yet • 21 May 2023

In this study, we envision that an ontology provides a semantics-rich, hierarchical structure, which will help select the best results generated by multiple PLM models and head words.

UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models

no code yet • 2 May 2023

In this paper, we propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.