Transductive Learning
39 papers with code • 0 benchmarks • 0 datasets
In this setting, both a labeled training sample and an (unlabeled) test sample are provided at training time. The goal is to predict only the labels of the given test instances as accurately as possible.
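To make the setting concrete, here is a toy, hypothetical sketch (not from any listed paper): because the test instances are known at training time, a transductive learner only has to assign labels to those specific points, shown here with a trivial 1-nearest-neighbour rule.

```python
# Toy illustration of the transductive setting: both the labeled
# training sample and the (unlabeled) test sample are given up front,
# and we only predict labels for those given test instances.

def transduce_1nn(train, test):
    """Assign each given test point the label of its nearest
    labeled training point (1-nearest-neighbour)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    preds = []
    for t in test:
        nearest = min(train, key=lambda pair: sqdist(pair[0], t))
        preds.append(nearest[1])
    return preds

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]  # labeled sample
test = [(0.1, 0.2), (0.9, 0.8)]                 # known test instances
print(transduce_1nn(train, test))  # ['A', 'B']
```

The point of the sketch is the problem framing, not the classifier: no model is deployed for unseen data, since the only prediction targets are the test instances provided at training time.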
Benchmarks
These leaderboards are used to track progress in Transductive Learning
Libraries
Use these libraries to find Transductive Learning models and implementations
Most implemented papers
Geom-GCN: Geometric Graph Convolutional Networks
From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome two fundamental weaknesses of message-passing aggregators.
DINE: Domain Adaptation from Single and Multiple Black-box Predictors
To ease the burden of labeling, unsupervised domain adaptation (UDA) aims to transfer knowledge in previous and related labeled datasets (sources) to a new unlabeled dataset (target).
Semi-Supervised Domain Generalizable Person Re-Identification
Instead, we aim to explore multiple labeled datasets to learn generalized domain-invariant representations for person re-id, which are expected to be universally effective in each new re-id scenario.
On Label-Efficient Computer Vision: Building Fast and Effective Few-Shot Image Classifiers
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.
Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.
Transductive Active Learning: Theory and Applications
We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region.
Unsupervised Tube Extraction Using Transductive Learning and Dense Trajectories
The combination of appearance-based static "objectness" (Selective Search), motion information (Dense Trajectories) and transductive learning (detectors are forced to "overfit" on the unsupervised data used for training) makes the proposed approach extremely robust.
Identifying Key Sentences for Precision Oncology Using Semi-Supervised Learning
To obtain a realistic classification model, we propose using abstracts summarised into relevant sentences as unlabelled examples through Self-Training.
Label Propagation for Deep Semi-supervised Learning
In this work, we employ a transductive label propagation method based on the manifold assumption to make predictions on the entire dataset. These predictions are then used to generate pseudo-labels for the unlabeled data and to train a deep neural network.
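As a hedged sketch of the general idea (a toy graph and parameters chosen for illustration, not the paper's actual implementation), graph-based label propagation diffuses known labels along edges of a similarity graph; the converged class scores on unlabeled nodes can then serve as pseudo-labels:

```python
# Minimal label-propagation sketch under the manifold assumption.
# adj is an adjacency matrix; labels[i] is a class index or None.
# Iterates F <- alpha * (row-normalized W) @ F + (1 - alpha) * seed.

def propagate(adj, labels, num_classes, alpha=0.9, iters=50):
    n = len(adj)
    # one-hot seed matrix; unlabeled nodes (None) start at zero
    seed = [[1.0 if labels[i] == c else 0.0 for c in range(num_classes)]
            for i in range(n)]
    scores = [row[:] for row in seed]
    for _ in range(iters):
        new = []
        for i in range(n):
            deg = sum(adj[i]) or 1.0
            row = []
            for c in range(num_classes):
                diffused = sum(adj[i][j] * scores[j][c] for j in range(n)) / deg
                row.append(alpha * diffused + (1 - alpha) * seed[i][c])
            new.append(row)
        scores = new
    # pseudo-label = argmax class score per node
    return [max(range(num_classes), key=lambda c: scores[i][c])
            for i in range(n)]

# chain graph 0-1-2-3; node 0 labeled class 0, node 3 labeled class 1
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(propagate(adj, [0, None, None, 1], 2))  # [0, 0, 1, 1]
```

Each interior node ends up with the label of its nearer seed, which is the manifold assumption at work: nearby points on the graph receive the same pseudo-label.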
Learning to learn via Self-Critique
In this paper, we propose a framework called Self-Critique and Adapt (SCA), which learns to learn a label-free loss function, parameterized as a neural network.