One-shot Unsupervised Domain Adaptation
7 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
We address the problem of One-Shot Unsupervised Domain Adaptation, in which only a single unlabeled target-domain sample is available during adaptation.
Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation
In this paper, we tackle the problem of one-shot unsupervised domain adaptation (OSUDA) for semantic segmentation, where the segmentation model sees only one unlabeled target image during training.
Semantic Self-adaptation: Enhancing Generalization with a Single Sample
The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation.
PODA: Prompt-driven Zero-shot Domain Adaptation
In this paper, we propose the task of 'Prompt-driven Zero-shot Domain Adaptation', where we adapt a model trained on a source domain using only a general natural-language description of the target domain, i.e., a prompt.
One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models
Departing from the common notion of transferring only the target "texture" information, we leverage text-to-image diffusion models (e.g., Stable Diffusion) to generate a synthetic target dataset with photo-realistic images that not only faithfully depict the style of the target domain, but are also characterized by novel scenes in diverse contexts.
Learnable Data Augmentation for One-Shot Unsupervised Domain Adaptation
This paper presents a classification framework based on learnable data augmentation to tackle the One-Shot Unsupervised Domain Adaptation (OS-UDA) problem.
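As a rough illustration of the one-shot augmentation idea (a minimal sketch, not the paper's actual learned method), labeled source images can be re-styled toward the single unlabeled target image by matching per-channel statistics, in the spirit of AdaIN-style transfer; the function names here are hypothetical:

```python
import numpy as np

def match_channel_stats(source, target, eps=1e-6):
    """Shift a source image's per-channel mean/std toward the target's.

    source, target: float arrays of shape (H, W, C).
    A crude stand-in for learned style transfer: with only one
    unlabeled target image, its channel statistics serve as the
    "target style" used to augment the labeled source data.
    """
    src_mu = source.mean(axis=(0, 1), keepdims=True)
    src_std = source.std(axis=(0, 1), keepdims=True)
    tgt_mu = target.mean(axis=(0, 1), keepdims=True)
    tgt_std = target.std(axis=(0, 1), keepdims=True)
    return (source - src_mu) / (src_std + eps) * tgt_std + tgt_mu

def style_augment(source, target, alpha):
    """Interpolate between the source image and its re-styled version,
    yielding augmentations of increasing 'target-ness' (alpha in [0, 1])."""
    return (1 - alpha) * source + alpha * match_channel_stats(source, target)
```

In an actual OS-UDA pipeline the augmentation parameters (here, the fixed statistics and alpha) would be learned so that the classifier trained on augmented source data transfers to the target domain.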
Domain Adaptation with a Single Vision-Language Embedding
Domain adaptation has been extensively investigated in computer vision but still requires access to target data at training time, which can be difficult to obtain under some uncommon conditions.