FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performance of various methods based on pseudo labeling and consistency regularization. However, we argue that existing methods may fail to utilize unlabeled data effectively, since they rely on either a pre-defined, fixed threshold or an ad-hoc threshold-adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to build intuition about the relationship between the desirable threshold and the model's learning status. Based on this analysis, we propose FreeMatch, which adjusts the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty to encourage diverse predictions during the early stage of training. Extensive experiments indicate the superiority of FreeMatch, especially when labeled data are extremely scarce. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reductions over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively. Moreover, FreeMatch can also boost the performance of imbalanced SSL. The code can be found at
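The core idea, adjusting the confidence threshold according to the model's learning status, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the learning status is tracked with exponential moving averages (EMAs) of prediction confidence, and the class names (`SelfAdaptiveThreshold`, `update`, `mask`) are hypothetical.

```python
import numpy as np

class SelfAdaptiveThreshold:
    """Sketch of self-adaptive thresholding for SSL pseudo labeling.

    Tracks a global threshold (EMA of the batch's max confidence) and
    per-class probability expectations, then scales the global threshold
    per class. Names and details are illustrative assumptions.
    """

    def __init__(self, num_classes: int, momentum: float = 0.999):
        self.m = momentum
        # Start at uniform confidence 1/C: early in training the model is
        # uncertain, so the threshold is low and more pseudo labels pass.
        self.tau = 1.0 / num_classes
        # EMA of the model's average predicted probability per class.
        self.p_tilde = np.full(num_classes, 1.0 / num_classes)

    def update(self, probs: np.ndarray) -> None:
        """probs: (batch, C) softmax outputs on unlabeled data."""
        self.tau = self.m * self.tau + (1 - self.m) * probs.max(axis=1).mean()
        self.p_tilde = self.m * self.p_tilde + (1 - self.m) * probs.mean(axis=0)

    def mask(self, probs: np.ndarray) -> np.ndarray:
        """Boolean mask of samples whose confidence passes their class's
        threshold; only these would contribute to the unsupervised loss."""
        per_class = self.p_tilde / self.p_tilde.max() * self.tau
        confidence = probs.max(axis=1)
        predicted = probs.argmax(axis=1)
        return confidence >= per_class[predicted]
```

As the model improves, the average confidence rises and the threshold tightens automatically, which is the behavior a fixed threshold cannot provide.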

Task                                  Dataset                  Model      Metric            Value  Global Rank
Semi-Supervised Image Classification  CIFAR-100, 10000 Labels  FreeMatch  Percentage error  21.68  #7
Semi-Supervised Image Classification  CIFAR-100, 2500 Labels   FreeMatch  Percentage error  26.47  #6
Semi-Supervised Image Classification  CIFAR-100, 400 Labels    FreeMatch  Percentage error  37.98  #6
Semi-Supervised Image Classification  CIFAR-10, 250 Labels     FreeMatch  Percentage error  4.88   #10
Semi-Supervised Image Classification  CIFAR-10, 40 Labels      FreeMatch  Percentage error  4.90   #1

