Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario

27 Aug 2023 · Noam Fluss, Guy Hacohen, Daphna Weinshall

Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance. Conventional SSL methods operate under the assumption that the labeled and unlabeled data share the same label space. In real-world scenarios, however, especially when the labeled training set is small, some classes may be entirely absent from it. To address this broader setting, we propose a general approach that augments existing SSL methods, enabling them to handle situations where certain classes are missing from the labeled data. This is achieved by adding a term to their objective function that penalizes the KL-divergence between the probability vector of the true class frequencies and that of the inferred class frequencies. Our experiments on two benchmark image classification datasets, CIFAR-100 and STL-10, show significant accuracy improvements over state-of-the-art SSL, open-set SSL, and open-world SSL methods, with the most pronounced gains when labeled data is severely limited, i.e., only a few labeled examples per class.
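Since no official code has been released, the following is a minimal PyTorch sketch of what such a penalty term could look like. The function name `class_frequency_kl_penalty`, the assumption that the true class frequencies are available as a probability vector `class_prior`, and the batch-mean estimate of the inferred frequencies are all illustrative choices, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def class_frequency_kl_penalty(unlabeled_logits: torch.Tensor,
                               class_prior: torch.Tensor) -> torch.Tensor:
    """KL(true class frequencies || inferred class frequencies) on a batch.

    unlabeled_logits: (batch, num_classes) raw model outputs on unlabeled data.
    class_prior: (num_classes,) assumed-known true class frequencies (sums to 1).
    """
    # Per-example class probabilities, then their batch mean as an
    # estimate of the class frequencies the model is inferring.
    probs = F.softmax(unlabeled_logits, dim=1)
    inferred_freq = probs.mean(dim=0)
    # KL(p || q) = sum_c p_c * log(p_c / q_c); epsilon added for numerical stability.
    eps = 1e-8
    return torch.sum(class_prior * torch.log((class_prior + eps) / (inferred_freq + eps)))
```

In training, this term would presumably be added to the base SSL objective with a weighting hyperparameter, e.g. `total_loss = ssl_loss + lam * class_frequency_kl_penalty(logits, class_prior)`, where `lam` is a hypothetical tuning knob.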
