CrossWeigh: Training Named Entity Tagger from Imperfect Annotations

Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere with model comparison. In this study, we dive deep into one of the widely adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% of the test sentences, a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments than those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. Specifically, it partitions the training data into several folds and trains independent NER models to identify potential mistakes in each fold. It then adjusts the weights of the training data accordingly to train the final NER model. Extensive experiments on three datasets demonstrate significant improvements from plugging various NER models into our proposed framework. All implementations and the corrected test set are available at our GitHub repo:
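The fold-then-reweight procedure described in the abstract can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: `train_fn` and `predict_fn` are placeholder hooks for any sentence-level NER trainer, and the full CrossWeigh additionally makes folds entity-disjoint, which is omitted here. The weighting rule `w_i = epsilon ** c_i` (down-weighting a sentence each time a cross-checking model disagrees with its label) follows the framework's description.

```python
import random


def crossweigh_weights(sentences, labels, train_fn, predict_fn,
                       k=5, t=3, epsilon=0.7, seed=0):
    """Estimate per-sentence training weights by cross-checking labels.

    sentences/labels are parallel lists. train_fn(X, y) returns a model;
    predict_fn(model, X) returns predicted labels. Both hooks are
    illustrative placeholders, not the paper's API.
    """
    n = len(sentences)
    mistake_counts = [0] * n  # times each sentence's label was flagged
    rng = random.Random(seed)
    for _ in range(t):  # t independent iterations with reshuffled folds
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]  # k disjoint folds
        for fold in folds:
            held = set(fold)
            train_idx = [i for i in range(n) if i not in held]
            # Train on the other folds, then check the held-out fold.
            model = train_fn([sentences[i] for i in train_idx],
                             [labels[i] for i in train_idx])
            preds = predict_fn(model, [sentences[i] for i in fold])
            for i, pred in zip(fold, preds):
                if pred != labels[i]:  # disagreement: potential mistake
                    mistake_counts[i] += 1
    # Sentences flagged c times get weight epsilon ** c in final training.
    return [epsilon ** c for c in mistake_counts]
```

With a trivial majority-label model, a sentence whose label disagrees with the rest of the data is flagged once per iteration and ends up with weight `epsilon ** t`, while consistently labeled sentences keep weight 1.0.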

PDF Abstract (IJCNLP 2019)


Introduced in the Paper:

CoNLL++

Used in the Paper:

CoNLL 2003, WNUT 2017

Results from the Paper

Ranked #3 on Named Entity Recognition (NER) on CoNLL++ (using extra training data)

Task                            Dataset                 Model                      Metric  Value  Global Rank
Named Entity Recognition (NER)  CoNLL++                 CrossWeigh + Pooled Flair  F1      94.28  # 3
Named Entity Recognition (NER)  CoNLL++                 Pooled Flair               F1      94.13  # 4
Named Entity Recognition (NER)  CoNLL 2003 (English)    CrossWeigh + Pooled Flair  F1      93.43  # 18
Named Entity Recognition (NER)  WNUT 2017               CrossWeigh + Pooled Flair  F1      50.03  # 14

