213 papers with code • 23 benchmarks • 35 datasets
Assigning a unique identity to entities (such as famous individuals, locations, or companies) mentioned in text (Source: Wikipedia).
Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.
First, we show that strong reading comprehension models pre-trained on large unlabeled data can generalize to unseen entities.
This paper introduces a conceptually simple, scalable, and highly effective BERT-based entity linking model, along with an extensive evaluation of its accuracy-speed trade-off.
We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.
We present ELQ, a fast end-to-end entity linking model for questions, which uses a biencoder to jointly perform mention detection and linking in one pass.
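Illustratively, the biencoder idea can be sketched as two encoders sharing a dot-product scoring space: token representations from the question side are scored against a table of entity embeddings, while a small head flags mention boundaries. The module names, dimensions, and toy data below are assumptions for illustration, not ELQ's actual implementation.

```python
import torch
import torch.nn as nn

class BiEncoderLinker(nn.Module):
    """Toy biencoder: scores mention spans against entity embeddings.

    A sketch of the general biencoder idea, not ELQ's real architecture.
    """

    def __init__(self, hidden_dim: int = 128, num_entities: int = 1000):
        super().__init__()
        # Stand-ins for the two BERT encoders used in practice.
        self.mention_encoder = nn.Linear(hidden_dim, hidden_dim)
        self.entity_embeddings = nn.Embedding(num_entities, hidden_dim)
        # Mention detection head: logits for token-is-start / token-is-end.
        self.detection_head = nn.Linear(hidden_dim, 2)

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim) contextual token vectors.
        mention_reps = self.mention_encoder(token_states)
        detection_logits = self.detection_head(token_states)
        # Linking: dot product of every token's rep with every entity.
        linking_scores = mention_reps @ self.entity_embeddings.weight.T
        return detection_logits, linking_scores

# Toy usage: random "contextual" states for one 16-token question.
model = BiEncoderLinker()
states = torch.randn(1, 16, 128)
det, link = model(states)
print(det.shape, link.shape)  # torch.Size([1, 16, 2]) torch.Size([1, 16, 1000])
```

Because the entity side can be embedded offline, linking at inference time reduces to a single matrix multiply against a precomputed table, which is what makes a one-pass design like this fast.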
FUDGE edits the graph structure by combining text segments (graph vertices) and pruning edges in an iterative fashion to obtain the final text entities and relationships.
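A rough sketch of that merge-and-prune loop, using networkx; the edge attributes (`score`, `same_entity`) and threshold stand in for the learned decisions the actual model makes and are invented here.

```python
import networkx as nx

def edit_graph(g: nx.Graph, score_threshold: float = 0.5) -> nx.Graph:
    """Iteratively merge text-segment vertices and prune weak edges.

    Toy illustration of the merge-and-prune idea; the real model learns
    which vertices to combine and which edges to drop.
    """
    changed = True
    while changed:
        changed = False
        # Prune edges whose (hypothetical) confidence score is too low.
        weak = [(u, v) for u, v, d in g.edges(data=True)
                if d.get("score", 0.0) < score_threshold]
        if weak:
            g.remove_edges_from(weak)
            changed = True
        # Merge adjacent vertices flagged as parts of the same entity.
        for u, v, d in list(g.edges(data=True)):
            if d.get("same_entity", False):
                g.nodes[u]["text"] += " " + g.nodes[v]["text"]
                g = nx.contracted_nodes(g, u, v, self_loops=False)
                changed = True
                break  # graph changed; restart the scan
    return g

# Toy usage: two segments of one entity plus a weakly connected neighbor.
g = nx.Graph()
g.add_node(0, text="New")
g.add_node(1, text="York")
g.add_node(2, text="noise")
g.add_edge(0, 1, score=0.9, same_entity=True)
g.add_edge(1, 2, score=0.1)
g = edit_graph(g)
print([g.nodes[n]["text"] for n in g.nodes])
# ['New York', 'noise']: the segments were merged, the weak edge pruned
```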
Entity linking involves aligning textual mentions of named entities to their corresponding entries in a knowledge base.
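At its simplest, this alignment can be pictured as candidate lookup in an alias table followed by context-based disambiguation. The tiny alias table, entity profiles, and overlap scoring below are invented for illustration; real systems link against large KBs such as Wikipedia or Wikidata using learned context encoders.

```python
# Minimal illustration of entity linking as mention-to-KB alignment.
ALIAS_TABLE = {
    "paris": ["Paris_(city)", "Paris_(myth)"],
    "jordan": ["Jordan_(country)", "Michael_Jordan"],
}
ENTITY_CONTEXT = {
    "Paris_(city)": {"france", "city", "capital"},
    "Paris_(myth)": {"troy", "helen", "myth"},
    "Jordan_(country)": {"amman", "river", "kingdom"},
    "Michael_Jordan": {"basketball", "bulls", "nba"},
}

def link(mention: str, context: str) -> str | None:
    """Pick the candidate entity whose profile best overlaps the context."""
    candidates = ALIAS_TABLE.get(mention.lower())
    if not candidates:
        return None
    words = set(context.lower().split())
    return max(candidates, key=lambda e: len(ENTITY_CONTEXT[e] & words))

print(link("Paris", "the capital of France"))             # Paris_(city)
print(link("Jordan", "played basketball for the Bulls"))  # Michael_Jordan
```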
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies.
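For a concrete sense of what the flat formulation discards, consider a tiny invented ontology: predicting a leaf type should imply all of its ancestors, a constraint a flat label set cannot express.

```python
# Toy type hierarchy (invented): a leaf prediction implies its ancestors,
# structure that a flat label set throws away.
PARENT = {
    "/person/artist/musician": "/person/artist",
    "/person/artist": "/person",
    "/person": None,
}

def ancestors(t: str) -> list[str]:
    """Walk up the ontology from a fine-grained type to the root."""
    out = []
    while t is not None:
        out.append(t)
        t = PARENT[t]
    return out

print(ancestors("/person/artist/musician"))
# ['/person/artist/musician', '/person/artist', '/person']
```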
To the best of our knowledge, this is the first time that plural mentions are thoroughly analyzed for these two resolution tasks.
Neural language representation models such as BERT, pre-trained on large-scale corpora, can capture rich semantic patterns from plain text and be fine-tuned to consistently improve performance on various NLP tasks.
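A minimal sketch of that pre-train-then-fine-tune recipe using the Hugging Face transformers API: one gradient step of a BERT sequence classifier on a made-up labeled example. The checkpoint, learning rate, and data are illustrative choices, not a prescribed setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pre-trained BERT and attach a fresh 2-way classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative gradient step on a made-up labeled example.
batch = tokenizer(["Paris is the capital of France."],
                  return_tensors="pt", padding=True)
labels = torch.tensor([1])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```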