SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization

10 Sep 2024 · Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, Tomoya Iwakura

NLP datasets may still contain annotation errors even when they are manually annotated. Researchers have attempted to develop methods that automatically reduce the adverse effect of such errors, but existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that uses a tokenization technique called subword regularization to simulate multiple error-detection models, avoiding the cost of training many models. Our proposed method, SubRegWeigh, performs annotation weighting four to five times faster than the existing method and also improves performance on document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified the pseudo-incorrect labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh .
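The core idea is that sampling different subword segmentations of the same sentence makes a single trained model behave like an ensemble of error detectors. The sketch below only illustrates that idea and is not the paper's implementation: the `predict_label` wrapper, the model path `spm.model`, and using the agreement rate directly as the weight are assumptions for illustration; the released code and the K-means variant differ in detail.

```python
# Minimal sketch of tokenization-sampling-based annotation weighting.
# Assumptions (not from the paper's released code): a SentencePiece unigram
# model file ("spm.model") and a `predict_label` function wrapping an
# already-trained classifier are available.

from typing import Callable, List

import sentencepiece as spm


def annotation_weights(
    sentences: List[str],
    gold_labels: List[int],
    predict_label: Callable[[List[str]], int],  # hypothetical wrapper around a trained model
    sp_model_path: str = "spm.model",           # placeholder path to a unigram SentencePiece model
    k: int = 10,                                # number of sampled tokenizations per sentence
    alpha: float = 0.2,                         # subword-regularization smoothing parameter
) -> List[float]:
    """Weight each example by how often the trained model, fed K randomly
    sampled tokenizations, agrees with the gold label. Low agreement is
    treated as a sign of a possible annotation error."""
    sp = spm.SentencePieceProcessor(model_file=sp_model_path)
    weights = []
    for text, gold in zip(sentences, gold_labels):
        agree = 0
        for _ in range(k):
            # Subword regularization: sample one segmentation from the
            # unigram LM instead of taking the single best tokenization.
            tokens = sp.encode(text, out_type=str, enable_sampling=True,
                               alpha=alpha, nbest_size=-1)
            if predict_label(tokens) == gold:
                agree += 1
        # Agreement rate in [0, 1] used directly as the training weight;
        # the paper's K-means variant derives weights differently.
        weights.append(agree / k)
    return weights
```

In this reading, the resulting per-example weights would then scale each example's loss during fine-tuning (e.g., multiplying the per-example cross-entropy by its weight), which is how annotation weighting typically enters training.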

Task | Dataset | Model | Metric | Value | Global Rank
Named Entity Recognition (NER) | CoNLL++ | RoBERTa + SubRegWeigh (K-means) | F1 | 95.45 | #3
Named Entity Recognition (NER) | CoNLL++ | LUKE + SubRegWeigh (K-means) | F1 | 95.27 | #4
Named Entity Recognition (NER) | CoNLL 2003 (English) | RoBERTa + SubRegWeigh (K-means) | F1 | 93.81 | #10
Named Entity Recognition (NER) | CoNLL 2003 (English) | LUKE + SubRegWeigh (K-means) | F1 | 94.2 | #4
Named Entity Recognition (NER) | CoNLL-2020 | LUKE + SubRegWeigh (K-means) | F1 | 95.31 | #1
Named Entity Recognition (NER) | CoNLL-2020 | RoBERTa + SubRegWeigh (K-means) | F1 | 94.96 | #2
Semantic Textual Similarity | MRPC | RoBERTa + SubRegWeigh (K-means) | Accuracy | 86.82% | #31
Sentiment Analysis | SST-2 Binary classification | RoBERTa + SubRegWeigh (K-means) | Accuracy | 94.84 | #28
Named Entity Recognition (NER) | WNUT 2017 | RoBERTa + SubRegWeigh (K-means) | F1 | 60.29 | #2
