Learning from Noisy Labels for Entity-Centric Information Extraction

EMNLP 2021 · Wenxuan Zhou, Muhao Chen

Recent information extraction approaches rely on training deep neural models. However, such models easily overfit noisy labels and suffer degraded performance. While filtering noisy labels out of large training resources is very costly, recent studies show that such labels take more training steps to be memorized and are forgotten more frequently than clean labels, and are therefore identifiable during training. Motivated by these properties, we propose a simple co-regularization framework for entity-centric information extraction, which consists of several neural models with identical structures but different parameter initializations. These models are jointly optimized with the task-specific losses and are regularized to generate similar predictions through an agreement loss, which prevents overfitting on noisy labels. Extensive experiments on two widely used but noisy information extraction benchmarks, TACRED and CoNLL03, demonstrate the effectiveness of our framework. We release our code to the community for future research.
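The joint objective described above combines each model's task loss with an agreement term over the models' predictions. The following is a minimal NumPy sketch of that idea, not the authors' released code: the function name, the choice of cross-entropy for the task loss, the KL-to-the-mean form of the agreement term, and the weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def co_regularization_loss(logits_per_model, labels, alpha=1.0):
    """Sketch of a joint loss for K identically structured models.

    logits_per_model: list of (batch, num_classes) arrays, one per model
                      (same architecture, different initializations).
    labels: (batch,) int array of (possibly noisy) training labels.
    alpha: illustrative weight on the agreement term.
    """
    probs = [softmax(l) for l in logits_per_model]
    idx = np.arange(len(labels))

    # Task loss: mean cross-entropy of each model against the labels.
    task = np.mean([-np.log(p[idx, labels] + 1e-12).mean() for p in probs])

    # Agreement loss: KL divergence of each model's distribution from the
    # models' average prediction. Pushing the models toward consensus
    # slows down memorization of noisy labels, since disagreement tends
    # to persist longer on mislabeled examples than on clean ones.
    avg = np.mean(probs, axis=0)
    agree = np.mean([
        (p * (np.log(p + 1e-12) - np.log(avg + 1e-12))).sum(-1).mean()
        for p in probs
    ])
    return task + alpha * agree
```

When all models produce identical logits, the agreement term vanishes and the objective reduces to the average task loss; early in training, while the differently initialized models still disagree, the agreement term dominates on hard-to-fit (often mislabeled) examples.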


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Named Entity Recognition | CoNLL++ | Noise-robust Co-regularization + LUKE | F1 | 95.88 | #1 |
| Named Entity Recognition | CoNLL++ | Noise-robust Co-regularization + BERT-large | F1 | 94.04 | #5 |
| Named Entity Recognition | CoNLL 2003 (English) | Co-regularized LUKE | F1 | 94.22 | #2 |
| Relation Extraction | TACRED | Noise-robust Co-regularization + BERT-large | F1 | 73.0 | #7 |
