Search Results for author: Robert A. Marsden

Found 7 papers, 3 papers with code

Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction

1 code implementation • 1 Jun 2023 • Robert A. Marsden, Mario Döbler, Bin Yang

To tackle the problem of universal TTA, we identify and highlight several challenges a self-training based method has to deal with: 1) model bias and the occurrence of trivial solutions when performing entropy minimization on varying sequence lengths, with and without multiple domain shifts; 2) loss of generalization, which exacerbates adaptation to multiple domain shifts and catastrophic forgetting; and 3) performance degradation due to shifts in the class prior.

Tasks: Test-time Adaptation
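
The challenges above map onto two concrete mechanisms named in the title: entropy minimization as the self-training signal, and weight ensembling with the source model to retain generalization. Below is a minimal PyTorch sketch of both ideas, assuming a classifier `model` and a frozen copy of the source weights; it is illustrative only, not the authors' implementation, and the momentum `alpha` is an assumed value.

```python
import torch

def entropy_minimization_step(model, optimizer, x):
    """One self-training step: minimize the mean softmax entropy of
    the model's predictions on an unlabeled test batch."""
    probs = model(x).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

@torch.no_grad()
def weight_ensemble(model, source_state, alpha=0.99):
    """Continually pull the adapted weights back toward the frozen
    source weights, limiting drift and catastrophic forgetting."""
    for name, param in model.state_dict().items():
        if param.dtype.is_floating_point:
            param.mul_(alpha).add_(source_state[name], alpha=1.0 - alpha)
```

Calling `weight_ensemble` after every `entropy_minimization_step` keeps the adapted model anchored to the source model, which speaks directly to challenge 2).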

Robust Mean Teacher for Continual and Gradual Test-Time Adaptation

1 code implementation • CVPR 2023 • Mario Döbler, Robert A. Marsden, Bin Yang

We demonstrate the effectiveness of our proposed method 'robust mean teacher' (RMT) on the continual and gradual corruption benchmarks CIFAR10-C, CIFAR100-C, and ImageNet-C. We further consider ImageNet-R and propose a new continual DomainNet-126 benchmark.

Tasks: Contrastive Learning, Test-time Adaptation
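
The core of a mean-teacher method is an exponential moving average (EMA) of the student's weights serving as the teacher, with a consistency loss between the two predictions; RMT makes this loss a symmetric cross-entropy. A minimal PyTorch sketch under these assumptions (names and the momentum value are illustrative, not the paper's code):

```python
import torch

def ema_update(teacher, student, momentum=0.999):
    """Update the teacher as an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def symmetric_cross_entropy(student_logits, teacher_logits):
    """Symmetric cross-entropy between student and teacher predictions,
    a consistency loss that is more robust than plain cross-entropy."""
    log_p_s = student_logits.log_softmax(dim=1)
    log_p_t = teacher_logits.log_softmax(dim=1)
    loss = -0.5 * ((log_p_t.exp() * log_p_s).sum(dim=1)
                   + (log_p_s.exp() * log_p_t).sum(dim=1))
    return loss.mean()
```

In a typical loop, the student is trained on `symmetric_cross_entropy(student(x), teacher(x).detach())` and `ema_update` is called once per step.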

Introducing Intermediate Domains for Effective Self-Training during Test-Time

1 code implementation • 16 Aug 2022 • Robert A. Marsden, Mario Döbler, Bin Yang

In this work, we address two problems that arise when applying self-training in the setting of test-time adaptation.

Tasks: Scene Segmentation, Style Transfer, +2
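
The intuition behind intermediate domains is that self-training works better when the domain gap is bridged gradually: instead of jumping from source-like data to the test distribution, the model first adapts to domains in between. The sketch below fakes the idea with simple linear blending; the paper itself generates intermediate domains via style transfer, so the blending function and all names here are assumptions.

```python
import torch

def intermediate_domains(x_source_style, x_target, num_steps=4):
    """Build a sequence of intermediate domains by linearly blending a
    source-styled view of the test batch with the test batch itself.
    Early elements look source-like, later ones match the target."""
    lams = torch.linspace(0.0, 1.0, num_steps)
    return [(1.0 - lam) * x_source_style + lam * x_target for lam in lams]
```

Self-training can then walk this list in order, using pseudo-labels from each step to adapt toward the next, slightly harder one.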

Continual Unsupervised Domain Adaptation for Semantic Segmentation using a Class-Specific Transfer

no code implementations • 12 Aug 2022 • Robert A. Marsden, Felix Wiewel, Mario Döbler, Yang Yang, Bin Yang

In this work, we focus on UDA and additionally address the case of adapting not only to a single target domain, but to a sequence of target domains.

Tasks: Data Augmentation, Semantic Segmentation, +2
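
A continual UDA loop of the kind described here visits a sequence of target domains with a single model. The sketch below uses plain confidence-thresholded pseudo-label self-training; the paper's class-specific style transfer and data augmentation are omitted, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def adapt_to_sequence(model, optimizer, target_loaders, threshold=0.9):
    """Sequentially adapt a source-trained model to several unlabeled
    target domains via confidence-thresholded pseudo-labels."""
    for loader in target_loaders:            # one loader per target domain
        for x in loader:                     # unlabeled target batch
            with torch.no_grad():
                conf, pseudo = model(x).softmax(dim=1).max(dim=1)
            loss = F.cross_entropy(model(x), pseudo, reduction="none")
            loss = (loss * (conf >= threshold)).mean()  # mask low confidence
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

For semantic segmentation the same logic applies per pixel, since `F.cross_entropy` accepts dense `(N, C, H, W)` logits with `(N, H, W)` targets.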

Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation

no code implementations • 5 May 2021 • Robert A. Marsden, Alexander Bartler, Mario Döbler, Bin Yang

To avoid the costly annotation of training data for unseen domains, unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.

Tasks: Contrastive Learning, Semantic Segmentation, +3
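
The contrastive component in a pipeline like this typically uses an InfoNCE-style objective: embeddings of matching pairs are pulled together while all other pairs in the batch are pushed apart. A generic PyTorch sketch of that standard formulation (not necessarily the paper's exact objective; the temperature is an assumed value):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for two batches of embeddings of shape (N, D),
    where row i of z1 and row i of z2 form a positive pair."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are positives
```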
