Label Error Detection
7 papers with code • 1 benchmark • 0 datasets
Identify labeling errors in data
Most implemented papers
Identifying Incorrect Annotations in Multi-Label Classification Data
In multi-label classification, each example in a dataset may be annotated as belonging to one or more classes (or none of the classes).
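The general idea behind this line of work can be sketched as follows (a minimal illustration, not the paper's exact algorithm; the function name `label_quality_scores` and the min-over-classes aggregation are assumptions for this sketch): given out-of-sample predicted class probabilities, score each annotation by how strongly the model agrees with the given binary label, and send the lowest-scoring examples for human review.

```python
import numpy as np

def label_quality_scores(labels, pred_probs):
    """Score multi-label annotations: low score = likely annotation error.

    labels:     (n, k) binary matrix of given annotations.
    pred_probs: (n, k) model-predicted probability per class.

    Each class contributes the model's confidence in the GIVEN label
    (p if labeled 1, 1 - p if labeled 0); the example's score is the
    minimum over classes, so one bad annotation drags the score down.
    """
    conf_in_given = np.where(labels == 1, pred_probs, 1.0 - pred_probs)
    return conf_in_given.min(axis=1)

# Toy usage: example 1 is annotated positive for class 0
# even though the model assigns it probability 0.05.
labels = np.array([[1, 0], [1, 1], [0, 1]])
pred_probs = np.array([[0.9, 0.1], [0.05, 0.8], [0.2, 0.95]])
scores = label_quality_scores(labels, pred_probs)
suspects = np.argsort(scores)  # review lowest-scoring examples first
```

In practice the probabilities should come from cross-validated (out-of-sample) predictions, so the model cannot simply memorize the noisy labels it is being used to audit.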
WenetSpeech: A 10000+ Hours Multi-domain Mandarin Corpus for Speech Recognition
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus consisting of 10000+ hours high-quality labeled speech, 2400+ hours weakly labeled speech, and about 10000 hours unlabeled speech, with 22400+ hours in total.
Automated Detection of Label Errors in Semantic Segmentation Datasets via Deep Learning and Uncertainty Quantification
In this work, we present, for the first time, a method for detecting label errors in semantic segmentation datasets, i.e., datasets with pixel-wise class labels.
CTRL: Clustering Training Losses for Label Error Detection
We propose a novel framework, called CTRL (Clustering TRaining Losses for label error detection), to detect label errors in multi-class datasets.
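The core mechanism named in the title can be sketched in a few lines (a hedged simplification, not the CTRL implementation itself): mislabeled examples tend to retain high training loss, so clustering per-example losses into two groups and flagging the high-loss cluster yields candidate label errors. The 1-D two-means routine below is an assumed stand-in for whatever clustering the framework actually uses.

```python
import numpy as np

def flag_by_loss_clustering(losses, n_iter=50):
    """Cluster per-example training losses into two groups and flag
    the higher-loss group as likely label errors (illustrative sketch)."""
    losses = np.asarray(losses, dtype=float)
    centers = np.array([losses.min(), losses.max()])  # low/high init
    for _ in range(n_iter):
        # Assign each loss to its nearest center, then recompute centers.
        assign = np.abs(losses[:, None] - centers[None, :]).argmin(axis=1)
        for c in (0, 1):
            if np.any(assign == c):
                centers[c] = losses[assign == c].mean()
    return assign == centers.argmax()  # True for the high-loss cluster

# Toy usage: clean examples converge to low loss, mislabeled ones stay high.
losses = [0.05, 0.1, 0.08, 2.1, 1.9]
flags = flag_by_loss_clustering(losses)
```

A natural refinement, which the paper's multi-class setting suggests, is to record loss trajectories over training epochs rather than a single final loss, since noisy labels are typically fit later than clean ones.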
The Re-Label Method For Data-Centric Machine Learning
In industrial deep learning applications, manually labeled data inevitably contains a certain amount of noise.
Identifying Label Errors in Object Detection Datasets by Loss Inspection
In this work, we introduce, for the first time, a benchmark for label error detection methods on object detection datasets, along with a label error detection method and a number of baselines.
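Benchmarking detection-by-loss-inspection typically works by injecting synthetic label errors and checking how many of the top-ranked (highest-loss) annotations are the injected ones. A minimal sketch of that evaluation loop, with an assumed metric name (`detection_precision_at_k`), not the paper's actual protocol:

```python
import numpy as np

def detection_precision_at_k(per_instance_loss, is_error, k):
    """Rank annotations by loss (highest first) and measure what fraction
    of the top-k flagged instances are true (e.g. injected) label errors."""
    order = np.argsort(per_instance_loss)[::-1]  # highest loss first
    return is_error[order[:k]].mean()

# Toy usage: the two injected errors have the two highest losses.
loss = np.array([0.2, 3.0, 0.1, 2.5, 0.3])
is_error = np.array([False, True, False, True, False])
p_at_2 = detection_precision_at_k(loss, is_error, 2)
```

For object detection the same idea applies per bounding box rather than per image, with separate loss terms available for localization and classification errors.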
AQuA: A Benchmarking Tool for Label Quality Assessment
We hope that our proposed design space and benchmark enable practitioners to choose the right tools to improve their label quality and that our benchmark enables objective and rigorous evaluation of machine learning tools facing mislabeled data.