Weakly Supervised Classification
20 papers with code • 2 benchmarks • 4 datasets
Latest papers with no code
Reducing self-supervised learning complexity improves weakly-supervised classification performance in computational pathology
Specifically, we analyzed how adaptations to data volume, architecture, and training algorithms affect downstream classification tasks, with emphasis on their computational cost.
CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification
Using the aggregated similarity scores as initial pseudo-labels at the training stage, we propose an optimization framework that trains the parameters of the classification network and refines the pseudo-labels for unobserved labels.
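The alternating scheme described above can be sketched generically: initialize soft pseudo-labels from similarity scores, fit a classifier to them, and refine the pseudo-labels with the classifier's predictions. This is an illustrative sketch under assumed data and mixing coefficients, not CDUL's exact updates; the "similarity scores" here are synthetic stand-ins for CLIP's image–label alignment scores.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Synthetic features and assumed multi-label similarity scores
# (stand-ins for CLIP alignment scores; all names are illustrative).
X = rng.normal(size=(60, 4))
W_true = rng.normal(size=(4, 3))
sim = sigmoid(X @ W_true + rng.normal(0, 0.3, (60, 3)))

pseudo = sim.copy()          # similarity scores as the initial soft pseudo-labels
V = np.zeros((4, 3))         # per-label linear classifier weights

for _ in range(200):
    p = sigmoid(X @ V)
    # Train the classifier toward the current pseudo-labels (gradient step
    # on a per-label logistic loss with soft targets).
    V -= 0.5 * X.T @ (p - pseudo) / len(X)
    # Refine the pseudo-labels with the classifier's own predictions.
    pseudo = 0.95 * pseudo + 0.05 * p

print(pseudo.shape)
```

The mixing coefficient (0.95/0.05) controls how quickly the pseudo-labels drift toward the model's predictions; it is an assumed hyperparameter of this sketch.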
CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
The visual examination of tissue biopsy sections is fundamental for cancer diagnosis, with pathologists analyzing sections at multiple magnifications to discern tumor cells and their subtypes.
Easy Learning from Label Proportions
We consider the problem of Learning from Label Proportions (LLP), a weakly supervised classification setup where instances are grouped into "bags", and only the frequency of class labels at each bag is available.
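A minimal sketch of the LLP setup just described: logistic regression trained only from bag-level label proportions, using a squared proportion-matching loss. The surrogate loss and toy data are assumptions for illustration, not this paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes. Instance labels are hidden; only each
# bag's fraction of positives (its label proportion) is observed.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)       # used only to build bags and evaluate

bags, props = [], []
for k in range(0, 21, 4):               # bag proportions 0.0, 0.2, ..., 1.0
    idx = np.r_[rng.choice(50, 20 - k, replace=False),
                50 + rng.choice(50, k, replace=False)]
    bags.append(idx)
    props.append(k / 20)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
w, b = np.zeros(2), 0.0

# Proportion-matching loss: squared gap between each bag's mean predicted
# positive probability and that bag's known label proportion.
for _ in range(500):
    gw, gb = np.zeros(2), 0.0
    for idx, p in zip(bags, props):
        s = sigmoid(X[idx] @ w + b)
        err = s.mean() - p
        g = s * (1 - s)                 # d sigmoid / d logit
        gw += 2 * err * (g[:, None] * X[idx]).mean(0)
        gb += 2 * err * g.mean()
    w -= 0.5 * gw
    b -= 0.5 * gb

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"instance-level accuracy: {acc:.2f}")
```

Although no instance ever receives a label, varying the proportions across bags is enough supervision to recover an instance-level classifier on this separable toy problem.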
Class-Imbalanced Complementary-Label Learning via Weighted Loss
In class-imbalanced complementary-label learning, the number of samples in some classes is considerably lower than in others, which degrades prediction accuracy.
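One common reweighting heuristic, sketched below, scales the loss inversely to class frequency; with complementary labels (a class the instance does *not* belong to), the loss pushes probability away from the ruled-out class. Both the surrogate loss and the weighting scheme are assumptions for illustration, not necessarily this paper's exact formulation.

```python
import numpy as np

counts = np.array([90, 8, 2])                    # imbalanced class counts
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights

def weighted_complementary_loss(probs, comp_label):
    # -log(1 - p[comp_label]) pushes probability away from the ruled-out
    # class; the weight amplifies losses tied to minority classes.
    return weights[comp_label] * -np.log(1 - probs[comp_label])

probs = np.array([0.5, 0.3, 0.2])                # model's class probabilities
print(weighted_complementary_loss(probs, 2))     # complementary label = class 2
```

Without the weights, losses attached to minority-class complementary labels would be swamped by the majority class during training.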
Weakly Supervised Classification of Vital Sign Alerts as Real or Artifact
Our weakly supervised models perform competitively with traditional supervised techniques and require less involvement from domain experts, demonstrating their use as efficient and practical alternatives to supervised learning in healthcare applications of machine learning.
Importance of Textlines in Historical Document Classification
The line-level system significantly improves results in script and font classification and in the dating task.
Self-Training with Differentiable Teacher
In self-training, the student contributes to the prediction performance, and the teacher controls the training process by generating pseudo-labels.
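The teacher–student loop described above can be sketched minimally: the teacher is fit on the few labeled points, pseudo-labels confident unlabeled points, and the student retrains on both. The nearest-centroid model and confidence threshold are illustrative assumptions, not this paper's differentiable-teacher formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes; only a handful of instances are labeled.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
labeled = np.r_[rng.choice(100, 5, replace=False),
                100 + rng.choice(100, 5, replace=False)]
unlabeled = np.setdiff1d(np.arange(200), labeled)

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(0) for c in (0, 1)])

def predict(cents, X):
    d = np.linalg.norm(X[:, None] - cents[None], axis=2)
    return d.argmin(1), np.abs(d[:, 0] - d[:, 1])   # labels + confidence margin

# Teacher: fit on the labeled subset, then pseudo-label confident points.
teacher = fit_centroids(X[labeled], y[labeled])
pseudo, margin = predict(teacher, X[unlabeled])
keep = unlabeled[margin > 1.0]                      # assumed threshold

# Student: retrain on the labeled data plus the teacher's pseudo-labels.
Xs = np.vstack([X[labeled], X[keep]])
ys = np.r_[y[labeled], predict(teacher, X[keep])[0]]
student = fit_centroids(Xs, ys)

acc = (predict(student, X)[0] == y).mean()
print(f"student accuracy: {acc:.2f}")
```

The paper's contribution is to make the teacher itself differentiable so that the pseudo-labeling step can be optimized end to end; the hard threshold above is the simple non-differentiable baseline.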
Weakly Supervised Classification Using Group-Level Labels
In many applications, finding adequate labeled data to train predictive models is a major challenge.
On the Robustness of Average Losses for Partial-Label Learning
Partial-label learning (PLL) learns from instances annotated with partial labels, where each partial label is a set of candidate labels of which exactly one is the true label.
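The average losses the title refers to treat every candidate label equally, so incorrect candidates directly inflate the loss; a tiny sketch (the loss form is a standard average-based PLL surrogate, assumed here for illustration):

```python
import numpy as np

def average_pll_loss(probs, candidates):
    """Average-based PLL loss: mean cross-entropy over the candidate set."""
    return np.mean([-np.log(probs[c]) for c in candidates])

probs = np.array([0.7, 0.2, 0.05, 0.05])  # model's predicted class probabilities
loss_tl_only = average_pll_loss(probs, [0])              # candidates = true label
loss_with_distractors = average_pll_loss(probs, [0, 2, 3])
print(loss_tl_only, loss_with_distractors)
```

Even when the model is confident in the true label (class 0), low-probability distractor candidates dominate the averaged loss, which is the robustness concern the paper studies.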