no code implementations • 19 Dec 2021 • Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama
A key assumption in supervised learning is that training and test data follow the same probability distribution.
1 code implementation • NeurIPS 2020 • Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama
Under distribution shift (DS), where the training data distribution differs from the test one, a powerful technique is importance weighting (IW), which handles DS in two separate steps: weight estimation (WE) estimates the test-over-training density ratio, and weighted classification (WC) trains the classifier on the weighted training data.
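A minimal sketch of this classic two-step IW pipeline (not the paper's proposed method): it assumes unlabeled test inputs are available and estimates the density ratio with a domain-discriminating logistic regression, one standard choice among several; the WC step then passes the estimated weights to scikit-learn's `sample_weight`. All names (`estimate_weights`, `weighted_classifier`, `X_train`, `X_test`) are illustrative.

```python
# Hypothetical sketch of the two-step IW pipeline described above,
# not the paper's method. WE uses a domain classifier (one standard
# density-ratio estimator); WC reuses sklearn's sample_weight support.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_weights(X_train, X_test):
    """WE step: estimate w(x) = p_test(x) / p_train(x).

    Trains a classifier to separate test (label 1) from training
    (label 0) inputs; by Bayes' rule the density ratio equals
    (n_train / n_test) * P(test | x) / P(train | x).
    """
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_test = clf.predict_proba(X_train)[:, 1]
    # Clip the denominator to avoid division by near-zero probabilities.
    return (len(X_train) / len(X_test)) * p_test / np.clip(1.0 - p_test, 1e-12, None)

def weighted_classifier(X_train, y_train, weights):
    """WC step: train the classifier on the weighted training data."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=weights)
```

Chaining the two steps, `weighted_classifier(X_train, y_train, estimate_weights(X_train, X_test))` yields a classifier fitted as if the training data were drawn from the test distribution; the clipping guards against unstable ratios when the two input distributions are easily separable.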