While producing a state-of-the-art result for the i2b2 2010 task (F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63), suggesting there is still plenty of opportunity for improvement on this new data.
Cross-lingual entity linking (XEL) grounds named entities in a source language to an English Knowledge Base (KB), such as Wikipedia.
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require understanding the context of the entity mention.
At test time, we classify a mention with this typing model and use soft type predictions to link the mention to the most similar candidate entity.
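A minimal sketch of this linking step, assuming the typing model outputs a probability per fine-grained type and each candidate entity carries a type vector from the KB (all names and the 4-type tag set below are hypothetical, not from the paper):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors, guarding against zero norms."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def link_by_soft_types(mention_type_probs, candidates):
    """Link a mention by picking the candidate entity whose KB type vector
    is most similar to the mention's soft type predictions."""
    if not candidates:
        return None  # no candidates generated; treat as NIL
    return max(candidates, key=lambda e: cosine(mention_type_probs, candidates[e]))

# Hypothetical tag set: [person, athlete, location, city]
mention_probs = np.array([0.70, 0.60, 0.05, 0.02])  # soft predictions from the typing model
candidates = {
    "Q615": np.array([1, 1, 0, 0]),  # an athlete entity
    "Q84":  np.array([0, 0, 1, 1]),  # a city entity
}
print(link_by_soft_types(mention_probs, candidates))  # -> "Q615"
```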
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real-world entities and are often unable to remember facts about those entities.
This paper describes the system that team MYTOMORROWS-TU DELFT developed for the 2019 Social Media Mining for Health Applications (SMM4H) Shared Task 3, for the end-to-end normalization of ADR tweet mentions to their corresponding MedDRA codes.
The task consists of recognizing mentions of named entities in Web documents, normalizing them, and linking them cross-lingually.
Our neural linking models consist of three parts: a PageRank-based candidate generation module, a dual-FOFE-net neural ranking model, and a simple NIL entity clustering system.
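An illustrative sketch of the first stage only, candidate generation, assuming a personalized PageRank over a KB link graph seeded from alias matches; the helper names, the alias table, and the use of networkx are assumptions for the sketch, not the paper's actual module:

```python
import networkx as nx

def generate_candidates(mention, alias_table, link_graph, top_k=20):
    """Seed a personalized PageRank from entities whose aliases match the
    mention string, then keep the top_k highest-scoring entities.

    alias_table: dict mapping a surface form -> set of entity ids
    link_graph:  networkx graph over KB entities (e.g. hyperlink structure)
    """
    seeds = {e for e in alias_table.get(mention.lower(), set()) if e in link_graph}
    if not seeds:
        return []  # nothing matched; a downstream NIL step would handle this
    personalization = {e: (1.0 if e in seeds else 0.0) for e in link_graph.nodes}
    scores = nx.pagerank(link_graph, alpha=0.85, personalization=personalization)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [entity for entity, _ in ranked[:top_k]]
```

The candidate lists produced here would then be scored by the neural ranking model, with unresolved mentions passed to the NIL clustering step.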
Named entity recognition (NER) and entity linking (EL) are two fundamentally related tasks, since in order to perform EL, the entity mentions first have to be detected.
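A minimal sketch of that dependency as a two-stage pipeline; the `ner_model` and `linker` interfaces below are assumed for illustration and are not tied to any specific system:

```python
def entity_linking_pipeline(text, ner_model, linker):
    """Run NER first, then link each detected mention to a KB entity.

    ner_model(text) -> list of (start, end, label) spans   # assumed interface
    linker(mention_text, context) -> KB entity id or None  # assumed interface
    """
    links = []
    for start, end, label in ner_model(text):
        mention = text[start:end]
        entity = linker(mention, text)  # None can stand for a NIL prediction
        links.append({"mention": mention, "span": (start, end),
                      "type": label, "entity": entity})
    return links
```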