no code implementations • 15 Dec 2023 • François Portier, Lionel Truquet, Ikko Yamane
Many existing covariate shift adaptation methods estimate sample weights to be applied to the loss values in order to mitigate the gap between the source and the target distributions.
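The importance-weighting idea can be sketched with a toy example (illustrative only, not the specific estimator from the paper above): here both distributions are known Gaussians so the density ratio is available in closed form, whereas in practice it must be estimated.

```python
import numpy as np

# Toy sketch of covariate shift adaptation by importance weighting.
# Source covariates ~ N(0, 1); target covariates ~ N(0.5, 1).
rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, size=1000)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Density-ratio weights w(x) = p_target(x) / p_source(x) on the source sample.
w = gauss_pdf(x_src, 0.5, 1.0) / gauss_pdf(x_src, 0.0, 1.0)

# Weighting the loss by w makes the source-sample average approximate
# the target-distribution risk: E_target[loss] ≈ mean(w * loss).
loss = (x_src - 1.0) ** 2          # toy per-sample loss
weighted_risk = np.mean(w * loss)  # ≈ E_target[(X - 1)^2] = 1.25
plain_risk = np.mean(loss)         # ≈ E_source[(X - 1)^2] = 2.0
```

Since the target mean (0.5) is closer to the loss minimizer (1.0) than the source mean (0.0), the weighted estimate is noticeably smaller than the unweighted one.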
1 code implementation • 1 Feb 2022 • Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama
In contrast to others, our method is model-free and even instance-free.
1 code implementation • 16 Jul 2021 • Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama
In this paper, we consider the task of predicting $Y$ from $X$ when we have no paired data of them, but we have two separate, independent datasets of $X$ and $Y$ each observed with some mediating variable $U$, that is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$.
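A naive two-stage baseline for this setting regresses $U$ on $X$ using $S_X$, regresses $Y$ on $U$ using $S_Y$, and composes the two fits. The sketch below (linear models on synthetic data; not the estimator proposed in the paper, which treats this setting more carefully) illustrates the data layout:

```python
import numpy as np

# Two independent datasets linked only through the mediating variable U.
rng = np.random.default_rng(1)

# S_X = {(X_i, U_i)}: here U = 2 X + noise.
x1 = rng.normal(size=500)
u1 = 2.0 * x1 + rng.normal(scale=0.1, size=500)

# S_Y = {(U'_j, Y'_j)}: drawn independently; here Y = -U + 3 + noise.
u2 = rng.normal(size=500)
y2 = -1.0 * u2 + 3.0 + rng.normal(scale=0.1, size=500)

# Stage 1: regress U on X.  Stage 2: regress Y on U.  Then compose.
a1, b1 = np.polyfit(x1, u1, 1)   # U ≈ a1 * X + b1
a2, b2 = np.polyfit(u2, y2, 1)   # Y ≈ a2 * U + b2

def predict_y(x):
    # Composition recovers Y ≈ -2 * X + 3 despite having no (X, Y) pairs.
    return a2 * (a1 * x + b1) + b2
```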
no code implementations • 8 Jul 2020 • Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
1 code implementation • ICML 2020 • Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama
We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
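Flooding replaces the training objective $L(\theta)$ with $|L(\theta) - b| + b$ for a chosen flood level $b > 0$, so gradient descent descends while $L > b$ and ascends while $L < b$. A minimal one-parameter sketch (toy loss, not the paper's experiments):

```python
# Flooding: gradient descent on |L(theta) - b| + b with toy loss
# L(theta) = theta**2 and flood level b = 0.25.

def grad_flooded(theta, b=0.25):
    loss = theta ** 2
    sign = 1.0 if loss > b else -1.0   # derivative of |L - b| w.r.t. L
    return sign * 2.0 * theta          # chain rule through L(theta)

theta = 2.0
for _ in range(200):
    theta -= 0.05 * grad_flooded(theta)
# The loss settles into an oscillation near the flood level 0.25
# instead of collapsing to 0, which is the intended behavior.
```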
1 code implementation • NeurIPS 2018 • Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama
Uplift modeling aims to estimate the incremental impact of an action on an individual's behavior, which is useful in various application domains such as targeted marketing (advertisement campaigns) and personalized medicine (medical treatments).
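Uplift is the difference between the expected outcome with and without the action, $u(x) = \mathbb{E}[Y \mid X = x, T = 1] - \mathbb{E}[Y \mid X = x, T = 0]$. The textbook "two-model" baseline below fits one regressor per arm and subtracts them; note this assumes jointly observed $(X, T, Y)$ triples, whereas the NeurIPS 2018 paper above studies a harder setting:

```python
import numpy as np

# Two-model uplift baseline on synthetic randomized-trial data.
rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-1.0, 1.0, size=n)
t = rng.integers(0, 2, size=n)       # randomized treatment flag
# Outcome: base effect 0.5 * x plus a treatment effect of x itself,
# so the true uplift is u(x) = x (positive for x > 0, negative below).
y = 0.5 * x + t * x + rng.normal(scale=0.1, size=n)

coef1 = np.polyfit(x[t == 1], y[t == 1], 1)   # treated-arm model
coef0 = np.polyfit(x[t == 0], y[t == 0], 1)   # control-arm model

def uplift(x_new):
    # Predicted incremental effect of treating an individual with x_new.
    return np.polyval(coef1, x_new) - np.polyval(coef0, x_new)
```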
no code implementations • 1 Aug 2015 • Ikko Yamane, Hiroaki Sasaki, Masashi Sugiyama
Log-density gradient estimation is a fundamental statistical problem with various practical applications such as clustering and measuring non-Gaussianity.