Improving Named Entity Recognition with Attentive Ensemble of Syntactic Information

Named entity recognition (NER) is highly sensitive to sentential syntactic and semantic properties: whether a span is an entity depends on how it is used and where it is placed in the running text. To model such properties, one can draw on existing resources that provide helpful knowledge for the NER task; several previous studies have demonstrated the effectiveness of doing so, yet they remain limited in how they leverage that knowledge, e.g., in distinguishing which pieces of it matter for a particular context. In this paper, we improve NER by leveraging different types of syntactic information through an attentive ensemble, which uses key-value memory networks to encode the syntactic information, syntax attention to weight it, and a gate mechanism to aggregate it. Experimental results on six English and Chinese benchmark datasets demonstrate the effectiveness of the proposed model, which outperforms previous studies on all of them.
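
The following is a minimal PyTorch sketch of the pipeline the abstract describes: a key-value memory encodes one type of syntactic information, a syntax attention weights the resulting vectors across syntax types, and a gate merges them with the context encoding. All module names, shapes, and hyperparameters are illustrative assumptions, not the authors' released AESINER implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyValueMemory(nn.Module):
    """Encode one type of syntactic information (e.g. POS tags, constituents,
    dependency relations) via a key-value memory read conditioned on the token state."""

    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        self.keys = nn.Embedding(num_labels, hidden_dim)    # syntactic keys
        self.values = nn.Embedding(num_labels, hidden_dim)  # syntactic values

    def forward(self, h, label_ids):
        # h: (batch, seq, hidden); label_ids: (batch, seq, n_assoc) syntactic label ids
        k = self.keys(label_ids)                              # (batch, seq, n_assoc, hidden)
        v = self.values(label_ids)
        scores = torch.einsum('bsh,bsnh->bsn', h, k)          # address memory with token state
        probs = F.softmax(scores, dim=-1)
        return torch.einsum('bsn,bsnh->bsh', probs, v)        # weighted read of the values


class AttentiveEnsemble(nn.Module):
    """Weight the memory outputs of different syntax types (syntax attention)
    and aggregate the result with the context encoding through a gate."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.type_attn = nn.Linear(hidden_dim, 1)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h, syntax_states):
        # syntax_states: list of (batch, seq, hidden) tensors, one per syntax type
        s = torch.stack(syntax_states, dim=2)                 # (batch, seq, types, hidden)
        alpha = F.softmax(self.type_attn(s), dim=2)           # weight each syntax type
        syn = (alpha * s).sum(dim=2)                          # attentive ensemble vector
        g = torch.sigmoid(self.gate(torch.cat([h, syn], dim=-1)))
        return g * h + (1 - g) * syn                          # gated aggregation

The gated output would then feed a standard tagging layer (e.g. a CRF) to produce entity labels.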

PDF | Abstract | Findings of EMNLP 2020
Task                              | Dataset                | Model   | Metric | Value | Global Rank
Chinese Named Entity Recognition  | OntoNotes 4            | AESINER | F1     | 81.18 | #7
Named Entity Recognition (NER)    | OntoNotes v5 (English) | AESINER | F1     | 90.32 | #12
Chinese Named Entity Recognition  | Resume NER             | AESINER | F1     | 96.62 | #4
Chinese Named Entity Recognition  | Weibo NER              | AESINER | F1     | 69.78 | #6
Named Entity Recognition (NER)    | WNUT 2016              | AESINER | F1     | 55.14 | #3
Named Entity Recognition (NER)    | WNUT 2017              | AESINER | F1     | 50.68 | #11

Methods


Key-Value Memory Networks, Syntax Attention, Gating Mechanism (as named in the abstract)