Search Results for author: Takeshi Teshima

Found 10 papers, 3 papers with code

Rethinking Importance Weighting for Transfer Learning

no code implementations • 19 Dec 2021 • Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama

A key assumption in supervised learning is that training and test data follow the same probability distribution.

Selection bias • Transfer Learning
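
The classical remedy under covariate shift is importance weighting: reweight each training loss by the test-to-training density ratio before averaging. Below is a minimal sketch of that baseline (the starting point this paper rethinks, not its proposed method), assuming the ratio is approximated with a train-vs-test domain classifier; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(x_train, x_test):
    """Approximate w(x) = p_test(x) / p_train(x) with a train-vs-test classifier."""
    x = np.vstack([x_train, x_test])
    d = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])  # 0 = train, 1 = test
    clf = LogisticRegression(max_iter=1000).fit(x, d)
    p_test = clf.predict_proba(x_train)[:, 1]
    # Bayes' rule: p_test(x)/p_train(x) is proportional to P(d=1|x)/P(d=0|x),
    # rescaled by the sample sizes.
    return (p_test / (1.0 - p_test)) * (len(x_train) / len(x_test))

def importance_weighted_risk(losses, weights):
    """Importance-weighted empirical risk over the training losses."""
    return np.mean(weights * losses)
```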

Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation

1 code implementation • 27 Feb 2021 • Takeshi Teshima, Masashi Sugiyama

Causal graphs (CGs) are compact representations of knowledge about the data-generating processes behind data distributions.

Data Augmentation
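
One way to turn a causal graph into "simple data augmentation" is to fit each child variable's conditional model given its parents and recombine inputs with resampled exogenous noise, producing extra samples consistent with the graph. The sketch below does this for a toy linear-Gaussian graph X → Y; it is an assumption-laden illustration of the idea, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data generated from the causal graph X -> Y.
x = rng.normal(size=(200, 1))
y = 2.0 * x[:, 0] + 0.1 * rng.standard_normal(200)

# Fit the child's conditional model given its parent and recover the noise.
model = LinearRegression().fit(x, y)
residuals = y - model.predict(x)

# Augment: keep the mechanism, recombine inputs with resampled noise terms.
y_aug = model.predict(x) + rng.permutation(residuals)
x_all = np.vstack([x, x])
y_all = np.concatenate([y, y_aug])
print(x_all.shape, y_all.shape)   # original 200 examples doubled to 400
```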

Universal Approximation Property of Neural Ordinary Differential Equations

no code implementations • 4 Dec 2020 • Takeshi Teshima, Koichi Tojo, Masahiro Ikeda, Isao Ishikawa, Kenta Oono

Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator.
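
As a rough illustration of why a NODE is invertible by construction, the sketch below pushes points forward by Euler-integrating a toy vector field and inverts the map by integrating the same field backward in time; the field, step count, and step size are arbitrary illustrative choices, and a trace estimator for the log-Jacobian determinant is omitted.

```python
import numpy as np

def velocity(z, t):
    """Toy time-dependent vector field f(z, t); a neural network in practice."""
    return np.tanh(z) * (1.0 + 0.1 * t)

def node_forward(z0, n_steps=1000, t1=1.0):
    """Map z(0) -> z(t1) by Euler-integrating dz/dt = f(z, t)."""
    dt = t1 / n_steps
    z = z0.copy()
    for k in range(n_steps):
        z = z + dt * velocity(z, k * dt)
    return z

def node_inverse(z1, n_steps=1000, t1=1.0):
    """Approximate inverse: integrate the same field backward in time."""
    dt = t1 / n_steps
    z = z1.copy()
    for k in range(n_steps - 1, -1, -1):
        z = z - dt * velocity(z, k * dt)
    return z

z0 = np.array([0.3, -1.2])
recovered = node_inverse(node_forward(z0))
print(np.max(np.abs(recovered - z0)))  # small discretization error remains
```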

Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators

no code implementations • NeurIPS 2020 • Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama

We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.

Image Generation • Representation Learning
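
For reference, an affine coupling layer leaves one half of the input untouched, scales and shifts the other half using functions of the first half, and is therefore invertible in closed form with a triangular Jacobian. The numpy sketch below uses fixed toy scale/shift functions in place of neural networks; it illustrates the layer type named in the criterion, not the paper's proof.

```python
import numpy as np

def s_net(x):
    """Toy scale function; a neural network in practice."""
    return 0.5 * np.tanh(x)

def t_net(x):
    """Toy shift function; a neural network in practice."""
    return np.sin(x)

def coupling_forward(x):
    """y1 = x1, y2 = x2 * exp(s(x1)) + t(x1); log|det J| = sum(s(x1))."""
    x1, x2 = np.split(x, 2, axis=-1)
    s, t = s_net(x1), t_net(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1), s.sum(axis=-1)

def coupling_inverse(y):
    """Exact inverse, obtained without inverting s_net or t_net."""
    y1, y2 = np.split(y, 2, axis=-1)
    return np.concatenate([y1, (y2 - t_net(y1)) * np.exp(-s_net(y1))], axis=-1)

x = np.random.default_rng(0).normal(size=(4, 6))
y, logdet = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))   # True: invertible in closed form
```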

γ-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator

no code implementations • 13 Jun 2020 • Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama

Approximate Bayesian computation (ABC) is a likelihood-free inference method that has been employed in various applications.
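
The simplest ABC variant is rejection sampling: draw parameters from the prior, simulate data, and keep parameters whose simulated summary statistics land within a tolerance of the observed ones. The Gaussian toy model, summaries, and tolerance below are illustrative; the paper's robust-divergence acceptance criterion is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(loc=2.0, scale=1.0, size=100)   # "observed" data
s_obs = np.array([x_obs.mean(), x_obs.std()])      # summary statistics

def simulate(theta, n=100):
    return rng.normal(loc=theta, scale=1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                 # draw from the prior
    x_sim = simulate(theta)
    s_sim = np.array([x_sim.mean(), x_sim.std()])
    if np.linalg.norm(s_sim - s_obs) < 0.2:        # tolerance on summaries
        accepted.append(theta)

print(len(accepted), np.mean(accepted))            # rough posterior sample / mean
```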

Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation

1 code implementation • 12 Jun 2020 • Masahiro Kato, Takeshi Teshima

Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation.

Anomaly Detection • Density Ratio Estimation • +2
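
As background, a classical least-squares approach to DRE (uLSIF-style, distinct from the deep non-negative Bregman minimization proposed here) fits the ratio r(x) = p_num(x) / p_den(x) on a kernel basis by solving a regularized linear system. A minimal numpy sketch, with illustrative hyperparameters:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ulsif_fit(x_num, x_den, sigma=1.0, lam=1e-3):
    """Least-squares fit of r(x) = p_num(x) / p_den(x) on a kernel basis."""
    centers = x_num                                     # basis centers
    phi_den = gaussian_kernel(x_den, centers, sigma)    # (n_den, n_basis)
    H = phi_den.T @ phi_den / len(x_den)                # E_den[phi phi^T]
    h = gaussian_kernel(x_num, centers, sigma).mean(0)  # E_num[phi]
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: np.maximum(gaussian_kernel(x, centers, sigma) @ alpha, 0.0)

rng = np.random.default_rng(0)
x_num = rng.normal(0.0, 1.0, size=(200, 1))   # numerator samples
x_den = rng.normal(0.5, 1.2, size=(200, 1))   # denominator samples
ratio = ulsif_fit(x_num, x_den)
print(ratio(np.zeros((1, 1))))                # estimated ratio at x = 0
```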

Few-shot Domain Adaptation by Causal Mechanism Transfer

1 code implementation • ICML 2020 • Takeshi Teshima, Issei Sato, Masashi Sugiyama

We take the structural equations in causal modeling as an example and propose a novel DA method, which is shown to be useful both theoretically and experimentally.

Domain Adaptation
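
The rough intuition behind mechanism transfer is that if domains share the structural equations and differ only in the distribution of independent exogenous noise, a handful of target examples can be multiplied by recombining their estimated noise terms and pushing them back through the mechanism. The sketch below illustrates this with a single linear structural equation fit by regression; it is a simplified illustration, not the nonlinear-ICA-based procedure of the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# A handful of labeled target-domain examples with y = f(x) + independent noise.
x_tgt = rng.uniform(-1.0, 1.0, size=(8, 1))
y_tgt = 1.5 * x_tgt[:, 0] + 0.2 * rng.standard_normal(8)

# Step 1: estimate the (assumed shared) mechanism and recover the noise terms.
mech = LinearRegression().fit(x_tgt, y_tgt)
noise = y_tgt - mech.predict(x_tgt)

# Step 2: recombine every input with every recovered noise term.
x_aug = np.repeat(x_tgt, len(noise), axis=0)
y_aug = (mech.predict(x_tgt)[:, None] + noise[None, :]).ravel()
print(x_aug.shape, y_aug.shape)   # 8 examples expanded to 64 pseudo-examples
```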

Learning from Positive and Unlabeled Data with a Selection Bias

no code implementations • ICLR 2019 • Masahiro Kato, Takeshi Teshima, Junya Honda

However, this assumption is unrealistic in many instances of PU learning because it fails to capture the existence of a selection bias in the labeling process.

Selection bias
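
For context, the standard unbiased PU risk estimator relies on the selected-completely-at-random assumption that this paper questions: it rewrites the negative-class risk using positive and unlabeled data together with the class prior. A numpy sketch with a logistic surrogate loss follows; the scores and the assumed class prior are illustrative.

```python
import numpy as np

def logistic_loss(score, label):
    """Surrogate loss l(g(x), y) = log(1 + exp(-y * g(x)))."""
    return np.log1p(np.exp(-label * score))

def unbiased_pu_risk(scores_p, scores_u, class_prior):
    """pi * E_P[l(+1)] + E_U[l(-1)] - pi * E_P[l(-1)]."""
    return (class_prior * logistic_loss(scores_p, +1).mean()
            + logistic_loss(scores_u, -1).mean()
            - class_prior * logistic_loss(scores_p, -1).mean())

# Illustrative classifier scores g(x) on positive and unlabeled samples, pi = 0.4.
rng = np.random.default_rng(0)
print(unbiased_pu_risk(rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.5, 500), 0.4))
```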

Clipped Matrix Completion: A Remedy for Ceiling Effects

no code implementations • 13 Sep 2018 • Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama

On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion.

Matrix Completion • Recommendation Systems
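
A minimal example of the low-rank completion principle (the vanilla building block, not the clipped variant proposed here) is the SoftImpute iteration: fill missing entries with the current estimate, soft-threshold the singular values, and repeat. A numpy sketch on a synthetic rank-2 matrix:

```python
import numpy as np

def soft_impute(x, mask, lam=0.5, n_iters=200):
    """Complete a matrix from entries where mask is True by iterative
    singular-value soft-thresholding (the SoftImpute algorithm)."""
    z = np.where(mask, x, 0.0)
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(z, full_matrices=False)
        low_rank = (u * np.maximum(s - lam, 0.0)) @ vt   # shrink singular values
        z = np.where(mask, x, low_rank)                   # keep observed entries
    return low_rank

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))   # rank-2 matrix
mask = rng.random(truth.shape) < 0.5                          # ~50% observed
estimate = soft_impute(truth, mask)
print(np.linalg.norm(estimate - truth) / np.linalg.norm(truth))  # relative error
```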
