no code implementations • 15 Apr 2022 • Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
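Invertibility by design is easiest to see in a coupling layer, whose inverse is available in closed form no matter what sub-network it wraps. A minimal sketch of an additive coupling layer (the sub-network `t` below is an arbitrary stand-in assumed for this example):

```python
import numpy as np

def t(x):
    # Stand-in for an arbitrary neural sub-network; any function works,
    # because inverting the layer never requires inverting t itself.
    return np.tanh(x)

def coupling_forward(x1, x2):
    # Additive coupling: the first block passes through unchanged,
    # the second block is shifted by a function of the first.
    return x1, x2 + t(x1)

def coupling_inverse(y1, y2):
    # Exact closed-form inverse: recompute t(y1) and subtract.
    return y1, y2 - t(y1)

x1, x2 = np.random.randn(4), np.random.randn(4)
z1, z2 = coupling_inverse(*coupling_forward(x1, x2))
assert np.allclose(x1, z1) and np.allclose(x2, z2)
```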
no code implementations • 19 Dec 2021 • Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama
A key assumption in supervised learning is that training and test data follow the same probability distribution.
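When this assumption breaks down (e.g., under covariate shift), a classical remedy related to this line of work is importance weighting: reweighting the training loss by the density ratio of test to training inputs. A minimal sketch, with the density-ratio weights assumed to be given:

```python
import numpy as np

def weighted_risk(losses, weights):
    # Importance-weighted empirical risk: the average of w(x) * loss over
    # training data approximates the test-distribution risk when
    # w = p_test / p_train.
    return np.mean(weights * losses)

losses = np.array([0.2, 1.3, 0.7, 0.4])   # per-example training losses
weights = np.array([0.5, 2.0, 1.0, 1.5])  # assumed density-ratio estimates
print(weighted_risk(losses, weights))
```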
1 code implementation • 27 Feb 2021 • Takeshi Teshima, Masashi Sugiyama
Causal graphs (CGs) are compact representations of knowledge about the data generating processes behind data distributions.
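As a toy illustration of how a CG encodes a data generating process, the DAG X -> Y <- Z pairs with structural equations in which each variable is computed from its parents; the particular functional forms below are assumptions made only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# DAG X -> Y <- Z, stored compactly as parent lists.
graph = {"X": [], "Z": [], "Y": ["X", "Z"]}

# Structural equations consistent with the graph (illustrative choices):
# each variable is a function of its parents plus independent noise.
n = 1000
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = 2.0 * X - Z + 0.1 * rng.normal(size=n)  # Y depends only on its parents
```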
no code implementations • 4 Dec 2020 • Takeshi Teshima, Koichi Tojo, Masahiro Ikeda, Isao Ishikawa, Kenta Oono
Neural ordinary differential equations (NODEs) form an invertible neural network architecture that is promising for its free-form Jacobian and the availability of a tractable Jacobian determinant estimator.
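The tractable Jacobian determinant estimator alluded to here is typically Hutchinson's stochastic trace estimator: for a NODE, the change in log-density is an integral of the trace of the Jacobian of the dynamics, and that trace can be estimated without materializing the Jacobian. A minimal sketch (the dynamics `f` is a toy stand-in, not a trained network):

```python
import torch

def f(z):
    # Toy stand-in for the NODE dynamics; in practice a neural network.
    return torch.tanh(z) * z

def hutchinson_trace(func, z, n_samples=10):
    # Unbiased estimate of tr(df/dz) via E[v^T (df/dz) v] with Rademacher
    # probe vectors v; only vector-Jacobian products are needed.
    z = z.clone().requires_grad_(True)
    out = func(z)
    est = 0.0
    for _ in range(n_samples):
        v = (torch.randint(0, 2, z.shape) * 2 - 1).to(z.dtype)
        (vjp,) = torch.autograd.grad(out, z, v, retain_graph=True)
        est = est + (vjp * v).sum()
    return est / n_samples

z = torch.randn(5)
print(hutchinson_trace(f, z))
```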
no code implementations • NeurIPS 2020 • Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
We ask whether invertible neural networks based on coupling flows (CF-INNs) have universal approximation properties, and we answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.
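An affine coupling layer, one of the two layer families named in the criterion, scales and shifts half of the variables by functions of the other half; it is invertible for any choice of the sub-networks. A sketch with placeholder sub-networks `s` and `t` (assumptions for this example):

```python
import numpy as np

def s(x):  # placeholder scale sub-network
    return 0.5 * np.tanh(x)

def t(x):  # placeholder shift sub-network
    return np.sin(x)

def affine_coupling_forward(x1, x2):
    # y1 = x1; y2 = x2 * exp(s(x1)) + t(x1) -- invertible for any s, t.
    return x1, x2 * np.exp(s(x1)) + t(x1)

def affine_coupling_inverse(y1, y2):
    # Undo the shift, then the scale, reusing y1 = x1.
    return y1, (y2 - t(y1)) * np.exp(-s(y1))

x1, x2 = np.random.randn(3), np.random.randn(3)
z1, z2 = affine_coupling_inverse(*affine_coupling_forward(x1, x2))
assert np.allclose(z1, x1) and np.allclose(z2, x2)
```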
no code implementations • 13 Jun 2020 • Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama
Approximate Bayesian computation (ABC) is a likelihood-free inference method that has been employed in various applications.
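The simplest instance is ABC rejection sampling: draw parameters from the prior, run the simulator, and keep draws whose simulated summary statistics land close to the observed ones. A minimal sketch on a toy Gaussian-mean problem (all modeling choices below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, size=100)   # data with unknown mean

def simulate(theta, size=100):
    # A simulator we can sample from but whose likelihood we treat as
    # intractable -- the setting that motivates ABC.
    return rng.normal(loc=theta, size=size)

def abc_rejection(observed, n_draws=5000, eps=0.1):
    # Accept prior draws whose simulated summary statistic (here, the
    # sample mean) is within eps of the observed one.
    accepted = []
    s_obs = observed.mean()
    for _ in range(n_draws):
        theta = rng.uniform(-5, 5)          # draw from a uniform prior
        if abs(simulate(theta).mean() - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = abc_rejection(observed)
print(posterior_samples.mean())             # should be near 2.0
```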
1 code implementation • 12 Jun 2020 • Masahiro Kato, Takeshi Teshima
Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation.
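One common baseline for DRE, distinct from the paper's own approach, is probabilistic classification: train a classifier to distinguish the two samples and convert its class probabilities into a ratio via Bayes' rule. A minimal sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_num = rng.normal(0.0, 1.0, size=(500, 1))   # numerator samples,   p(x)
x_den = rng.normal(0.5, 1.2, size=(500, 1))   # denominator samples, q(x)

# By Bayes' rule, p(x)/q(x) = P(c=1|x)/P(c=0|x) up to the class-prior
# ratio, which cancels here because the two samples are equally sized.
X = np.vstack([x_num, x_den])
c = np.concatenate([np.ones(len(x_num)), np.zeros(len(x_den))])
clf = LogisticRegression().fit(X, c)

def density_ratio(x):
    proba = clf.predict_proba(x)
    return proba[:, 1] / proba[:, 0]

print(density_ratio(np.array([[0.0]])))
```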
1 code implementation • ICML 2020 • Takeshi Teshima, Issei Sato, Masashi Sugiyama
We take the structural equations in causal modeling as an example and propose a novel domain adaptation (DA) method, which is shown to be useful both theoretically and experimentally.
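The mechanism-transfer idea can be caricatured as follows: if data arise from an invertible mechanism applied to independent components, one can invert the mechanism on a few target-domain examples, shuffle the recovered components across examples, and map them back to synthesize additional target data. A heavily simplified sketch in which the mechanism is linear and assumed known (in practice it would have to be estimated, e.g., via nonlinear ICA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed known invertible (here, linear) mechanism x = A @ s.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
A_inv = np.linalg.inv(A)

x_target = rng.normal(size=(8, 2)) @ A.T   # few-shot target-domain data

# 1. Invert the mechanism to recover the independent components.
s = x_target @ A_inv.T
# 2. Shuffle each component independently across examples.
s_aug = np.stack([rng.permutation(s[:, j]) for j in range(s.shape[1])], axis=1)
# 3. Push the shuffled components back through the mechanism.
x_aug = s_aug @ A.T
print(x_aug.shape)   # (8, 2): synthesized target-domain examples
```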
no code implementations • ICLR 2019 • Masahiro Kato, Takeshi Teshima, Junya Honda
However, the typical assumption that the labeled positive data are selected completely at random from the positive-class distribution is unrealistic in many instances of positive-unlabeled (PU) learning because it fails to capture the existence of a selection bias in the labeling process.
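For reference, the standard unbiased PU risk estimator below is valid precisely under that selected-completely-at-random assumption; it rewrites the negative-class risk using positive and unlabeled data only. A sketch with an assumed class prior `pi`, loss `ell`, and scorer `g` (all toy choices):

```python
import numpy as np

def pu_risk(g, ell, x_p, x_u, pi):
    # Unbiased PU risk: under selection-completely-at-random,
    #   R(g) = pi * E_p[ell(g, +1)] + E_u[ell(g, -1)] - pi * E_p[ell(g, -1)]
    r_p_pos = np.mean(ell(g(x_p), +1))
    r_p_neg = np.mean(ell(g(x_p), -1))
    r_u_neg = np.mean(ell(g(x_u), -1))
    return pi * r_p_pos + r_u_neg - pi * r_p_neg

# Toy instantiation: linear scorer and logistic loss.
ell = lambda z, y: np.log1p(np.exp(-y * z))
g = lambda x: x @ np.array([1.0, -1.0])
x_p = np.random.randn(50, 2) + 1.0   # labeled positives (assumed unbiased)
x_u = np.random.randn(200, 2)        # unlabeled mixture
print(pu_risk(g, ell, x_p, x_u, pi=0.4))
```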
no code implementations • 13 Sep 2018 • Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama
On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various kinds of missing or corrupted observations by exploiting its low-rank structure.
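A representative MC method is SoftImpute-style iterative singular-value thresholding: repeatedly fill the missing entries with the current estimate and shrink the singular values. A minimal sketch (hyperparameters are illustrative):

```python
import numpy as np

def soft_impute(M, mask, lam=0.5, n_iters=100):
    # Low-rank completion by iterative SVD soft-thresholding: impute
    # missing entries with the current estimate, then shrink singular
    # values toward a low-rank solution.
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(np.where(mask, M, X), full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # soft-threshold singular values
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank-3 truth
mask = rng.random(A.shape) < 0.6                         # 60% observed
A_hat = soft_impute(A, mask)
print(np.abs((A_hat - A)[~mask]).mean())                 # error on missing entries
```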