CIFAR-100N (Real-World Human Annotations)

Introduced by Wei et al. in Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

This work presents two new benchmark datasets, CIFAR-10N and CIFAR-100N, which equip the training sets of CIFAR-10 and CIFAR-100 with human-annotated, real-world noisy labels collected from Amazon Mechanical Turk.
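For reference, below is a minimal sketch of how the CIFAR-100N annotations might be paired with the standard torchvision CIFAR-100 training split. The file name CIFAR-100_human.pt and its keys ('clean_label', 'noisy_label') are assumptions based on the authors' public release and should be verified against the actual download.

```python
# Minimal sketch: attach the CIFAR-100N human annotations to the CIFAR-100
# training split. ASSUMPTION: the label file "CIFAR-100_human.pt" and its
# keys ('clean_label', 'noisy_label') follow the authors' release and are
# index-aligned with torchvision's 50,000-image training set.
import numpy as np
import torch
from torchvision.datasets import CIFAR100

# weights_only=False because the file stores plain Python/NumPy objects,
# not just tensors.
labels = torch.load("CIFAR-100_human.pt", weights_only=False)
noisy = np.asarray(labels["noisy_label"])   # human-annotated (noisy) labels
clean = np.asarray(labels["clean_label"])   # original CIFAR-100 labels

train_set = CIFAR100(root="./data", train=True, download=True)
assert len(train_set) == len(noisy)         # one noisy label per training image

# Swap the clean targets for the real-world noisy annotations.
train_set.targets = noisy.tolist()

# Fraction of training images whose human label disagrees with the original.
print(f"observed noise rate: {(noisy != clean).mean():.3f}")
```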

License

  • Unknown

Modalities

  • Images