This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data.
In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain.
A clustering branch is employed to ensure that the learned representation preserves the underlying cluster structure, by matching the estimated cluster-assignment distribution to the inherent cluster distribution for each target sample.
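One standard way to match an estimated assignment distribution to a reference distribution is to minimize a KL-divergence term; the sketch below illustrates this under that assumption (the distributions and cluster count are hypothetical, not taken from the paper):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions over clusters;
    # eps guards against log(0) for empty clusters
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example with 3 clusters
estimated = [0.2, 0.5, 0.3]   # soft assignment predicted for one target sample
inherent  = [0.1, 0.7, 0.2]   # assumed inherent cluster distribution

# The matching loss is zero exactly when the two distributions agree
loss = kl_divergence(inherent, estimated)
```

Minimizing this term over the clustering branch pulls the predicted assignments toward the inherent cluster distribution.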
To construct a well-performing recommender offline, eliminating the selection bias in rating feedback is critical.
An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.
In this work, we present a novel upper bound on the target error for unsupervised domain adaptation.
However, when applying entropy minimization to UDA for semantic segmentation, the gradient of the entropy loss is biased towards samples that are easy to transfer.
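This bias can be seen directly from the entropy's derivative: for a binary prediction probability p, dH/dp = -log(p / (1 - p)), whose magnitude grows as p approaches 0 or 1, so confident ("easy") predictions dominate the gradient. A minimal sketch (the probability values are illustrative assumptions):

```python
import numpy as np

def entropy(p):
    # Binary Shannon entropy H(p) of a prediction probability p
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def entropy_grad(p):
    # dH/dp = -log(p / (1 - p)); its magnitude grows as p nears 0 or 1
    return -np.log(p / (1 - p))

easy = 0.95   # confident prediction ("easy to transfer")
hard = 0.55   # uncertain prediction ("hard to transfer")

# The confident sample contributes a much larger gradient magnitude,
# which is the bias the sentence above describes
print(abs(entropy_grad(easy)), abs(entropy_grad(hard)))
```

This is why follow-up work replaces the entropy loss with alternatives whose gradients are flatter near confident predictions.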