Transductive Learning

28 papers with code • 0 benchmarks • 0 datasets

In this setting, both a labeled training sample and an unlabeled test sample are provided at training time. The goal is to predict the labels of those given test instances as accurately as possible, rather than to learn a general rule for unseen data.
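
As a concrete illustration, here is a minimal transductive example using scikit-learn's LabelPropagation: the unlabeled test instances are supplied at fit time (marked with the label -1), and the model only has to label those specific points. The toy data and hyperparameters are illustrative.

```python
# Minimal sketch of the transductive setting: labeled and unlabeled
# (test) points are seen together at training time, and predictions
# are read off for exactly those unlabeled points.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelPropagation

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

# Pretend only the first 20 points are labeled; the rest are the
# (unlabeled) test sample whose labels we must predict.
y_train = np.full_like(y, -1)
y_train[:20] = y[:20]

model = LabelPropagation(kernel="rbf", gamma=20)
model.fit(X, y_train)  # labeled and unlabeled data fitted jointly

# transduction_ holds the inferred labels for every input point,
# including the unlabeled test instances.
pred = model.transduction_[20:]
print("test accuracy:", (pred == y[20:]).mean())
```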

Most implemented papers

Geom-GCN: Geometric Graph Convolutional Networks

graphdml-uiuc-jlu/geom-gcn ICLR 2020

From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome two weaknesses of existing message-passing aggregators.

DINE: Domain Adaptation from Single and Multiple Black-box Predictors

tim-learn/dine CVPR 2022

To ease the burden of labeling, unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target).
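
The black-box setting DINE addresses can be sketched as distillation from the source model's soft predictions alone. The sketch below is a generic distillation loop under that assumption, not DINE's exact procedure; `source_predict`, the data loader, and all hyperparameters are illustrative.

```python
# A hedged sketch of black-box domain adaptation: the source model is
# available only through its predictions (e.g., an API), so we distill
# those soft outputs into a new target-domain model.
import torch
import torch.nn.functional as F

def distill_from_black_box(target_model, source_predict, loader, epochs=5):
    """source_predict(x) -> class probabilities, treated as a black box."""
    opt = torch.optim.Adam(target_model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in loader:  # target data is unlabeled; labels unused
            with torch.no_grad():
                teacher = source_predict(x)          # black-box soft labels
            student = F.log_softmax(target_model(x), dim=1)
            loss = F.kl_div(student, teacher, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return target_model
```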

Semi-Supervised Domain Generalizable Person Re-Identification

JDAI-CV/fast-reid 11 Aug 2021

Instead, we aim to exploit multiple labeled datasets to learn generalized domain-invariant representations for person re-id, which are expected to be universally effective for each newly arriving re-id scenario.

On Label-Efficient Computer Vision: Building Fast and Effective Few-Shot Image Classifiers

plai-group/simple-cnaps University of British Columbia Theses and Dissertations 2021

The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.
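
As a rough illustration of the classifier family Simple CNAPS builds on, here is a plain Mahalanobis-distance classifier over fixed features. The paper's hierarchical regularization of the class covariances is replaced here by simple shrinkage toward the identity, so this is a sketch of the idea, not the paper's method.

```python
# Each class is summarized by its mean and a regularized covariance in
# feature space; queries are assigned to the nearest class under the
# Mahalanobis metric.
import numpy as np

def mahalanobis_classify(feats, labels, queries, reg=1.0):
    classes = np.unique(labels)
    dists = []
    for c in classes:
        Xc = feats[labels == c]
        mu = Xc.mean(axis=0)
        # Shrink the class covariance toward the identity for stability.
        cov = np.cov(Xc, rowvar=False) + reg * np.eye(feats.shape[1])
        inv = np.linalg.inv(cov)
        d = queries - mu
        # Squared Mahalanobis distance for every query to class c.
        dists.append(np.einsum("ij,jk,ik->i", d, inv, d))
    return classes[np.argmin(np.stack(dists, axis=1), axis=1)]
```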

Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning

plai-group/simple-cnaps 13 Jan 2022

The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.

Unsupervised Tube Extraction Using Transductive Learning and Dense Trajectories

mihaipuscas/unsupervised-tube-extraction ICCV 2015

The combination of appearance-based static "objectness" (Selective Search), motion information (Dense Trajectories), and transductive learning (detectors are forced to "overfit" on the unsupervised data used for training) makes the proposed approach extremely robust.

Identifying Key Sentences for Precision Oncology Using Semi-Supervised Learning

nachne/semisuper WS 2018

To obtain a realistic classification model, we propose using abstracts summarised into relevant sentences as unlabelled examples for Self-Training.
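
A generic self-training round of this kind can be sketched with scikit-learn's SelfTrainingClassifier: a base classifier is fit on the labelled data, and its most confident predictions on the unlabelled pool are added as pseudo-labels. The random features standing in for sentence representations and the confidence threshold below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))          # stand-in for sentence features
y = (X[:, 0] > 0).astype(int)

y_partial = np.full(300, -1)            # -1 marks unlabelled examples
y_partial[:30] = y[:30]

# Confident pseudo-labels (prob >= 0.9) are folded into training.
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
clf.fit(X, y_partial)
print("accuracy on the unlabelled pool:", clf.score(X[30:], y[30:]))
```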

Label Propagation for Deep Semi-supervised Learning

ahmetius/LP-DeepSSL CVPR 2019

In this work, we employ a transductive label propagation method based on the manifold assumption to make predictions on the entire dataset, use these predictions to generate pseudo-labels for the unlabeled data, and train a deep neural network.
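
A hedged sketch of one round of this idea, using scikit-learn's LabelSpreading in place of the paper's nearest-neighbor graph construction: labels propagated over the whole dataset become pseudo-labels for training a classifier. The real method alternates this step with retraining the deep feature extractor and weights pseudo-labels by certainty, both omitted here.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.neural_network import MLPClassifier

def lp_pseudo_label_round(feats, y_partial):
    """y_partial uses -1 for unlabeled points (manifold assumption)."""
    # Propagate labels over a k-NN graph built on the fixed features.
    lp = LabelSpreading(kernel="knn", n_neighbors=10)
    lp.fit(feats, y_partial)
    pseudo = lp.transduction_          # labels for the entire dataset
    # Train a network on real + pseudo labels together.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    clf.fit(feats, pseudo)
    return clf, pseudo
```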

Learning to learn via Self-Critique

AntreasAntoniou/Learning_to_Learn_via_Self-Critique 24 May 2019

In this paper, we propose a framework called Self-Critique and Adapt (SCA), which learns to learn a label-free loss function, parameterized as a neural network.
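
A minimal sketch of a label-free loss parameterized as a neural network, in the spirit of SCA: a small critic maps the model's predictions on unlabeled target-set examples to a scalar used for further adaptation. The critic architecture and shapes are assumptions; in the actual framework the critic itself would be meta-learned in an outer loop.

```python
import torch
import torch.nn as nn

class LabelFreeLoss(nn.Module):
    """Critic that scores predictions without needing any labels."""
    def __init__(self, num_classes):
        super().__init__()
        self.critic = nn.Sequential(
            nn.Linear(num_classes, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, logits):
        # Score each unlabeled prediction; average into a scalar loss.
        return self.critic(logits.softmax(dim=1)).mean()

# Inner-loop use: adapt the classifier on the unlabeled target set.
loss_fn = LabelFreeLoss(num_classes=5)
model = nn.Linear(64, 5)
target_feats = torch.randn(20, 64)       # unlabeled target examples
loss = loss_fn(model(target_feats))
loss.backward()                          # gradients drive adaptation
```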

Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations

lokhande-vishnu/DeepHermites CVPR 2020

Rectified Linear Units (ReLUs) are among the most widely used activation functions across a broad variety of vision tasks.
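
The paper replaces ReLUs with activations built from Hermite polynomials. The sketch below is one plausible reading of that idea: a learnable linear combination of the first few normalized probabilists' Hermite polynomials. The number of terms and the initialization are assumptions, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn

class HermiteActivation(nn.Module):
    """Activation as a learnable sum of Hermite polynomial basis terms."""
    def __init__(self, num_terms=4):
        super().__init__()
        self.coeffs = nn.Parameter(torch.randn(num_terms) * 0.1)

    def forward(self, x):
        # Probabilists' Hermite recurrence: He_{n+1} = x*He_n - n*He_{n-1}
        h_prev, h_curr = torch.ones_like(x), x
        out = self.coeffs[0] * h_prev            # He_0 term
        for n in range(1, len(self.coeffs)):
            # Normalize He_n by sqrt(n!) before weighting it.
            out = out + self.coeffs[n] * h_curr / math.sqrt(math.factorial(n))
            h_prev, h_curr = h_curr, x * h_curr - n * h_prev
        return out
```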