Search Results for author: Hiroaki Sasaki

Found 13 papers, 2 papers with code

Representation learning for maximization of MI, nonlinear ICA and nonlinear subspaces with robust density ratio estimation

no code implementations · 6 Jan 2021 · Hiroaki Sasaki, Takashi Takenouchi

We then propose a practical method based on outlier-robust density ratio estimation, which can be seen as performing MI maximization, nonlinear ICA, or nonlinear subspace estimation.

Contrastive Learning · Density Ratio Estimation · +1
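
As background for the density-ratio view above, the sketch below shows the standard density-ratio-via-classification trick that contrastive MI-maximization methods build on; it is not the paper's outlier-robust estimator, and the Gaussian toy data and sklearn classifier are illustrative assumptions only.

```python
# A logistic regression trained to separate samples from p(x) (label 1) and
# q(x) (label 0) has log-odds that approximate the log density ratio log p/q.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_p = rng.normal(loc=1.0, scale=1.0, size=(2000, 1))  # samples from p
x_q = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))  # samples from q

X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])

clf = LogisticRegression().fit(X, y)

# Estimated log density ratio at test points: the classifier's logit.
x_test = np.linspace(-3.0, 4.0, 5).reshape(-1, 1)
print(clf.decision_function(x_test))  # true log-ratio here is x - 0.5
```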

Robust contrastive learning and nonlinear ICA in the presence of outliers

no code implementations · 1 Nov 2019 · Hiroaki Sasaki, Takashi Takenouchi, Ricardo Monti, Aapo Hyvärinen

We develop two robust nonlinear ICA methods based on the γ-divergence, which is a robust alternative to the KL-divergence in logistic regression.

Causal Discovery · Contrastive Learning · +2
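
For reference, one common form of the γ-divergence (in the Fujisawa–Eguchi sense) between densities p and q is shown below; as γ → 0 it recovers the KL divergence, and the log-of-integral structure is what damps the influence of outliers.

```latex
D_\gamma(p \,\|\, q)
  = \frac{1}{\gamma(1+\gamma)} \log \int p(x)^{1+\gamma}\,dx
  \;-\; \frac{1}{\gamma} \log \int p(x)\,q(x)^{\gamma}\,dx
  \;+\; \frac{1}{1+\gamma} \log \int q(x)^{1+\gamma}\,dx,
  \qquad \gamma > 0 .
```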

Robust modal regression with direct log-density derivative estimation

no code implementations · 18 Oct 2019 · Hiroaki Sasaki, Tomoya Sakai, Takafumi Kanamori

In order to apply a gradient method for the maximization, the fundamental challenge is to accurately approximate the gradient of the MRR, not the MRR itself.

Density Estimation · regression
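
As generic background (not this paper's exact MRR formulation): modal regression predicts the conditional mode, and a gradient-ascent update in y needs only the log-density derivative, which is why estimating that derivative directly is the key ingredient.

```latex
\hat{y}(x) \;=\; \operatorname*{arg\,max}_{y}\; p(y \mid x),
\qquad
y \;\leftarrow\; y + \eta\, \frac{\partial}{\partial y} \log p(y \mid x) .
```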

Neural-Kernelized Conditional Density Estimation

no code implementations · 5 Jun 2018 · Hiroaki Sasaki, Aapo Hyvärinen

Among existing approaches, non-parametric and/or kernel-based methods are often difficult to use on large datasets, while neural-network-based methods usually make restrictive parametric assumptions on the probability densities.

Density Estimation · Dimensionality Reduction · +1

Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning

1 code implementation · 22 May 2018 · Aapo Hyvärinen, Hiroaki Sasaki, Richard E. Turner

Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure.

Contrastive Learning · Representation Learning · +2
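
A minimal sketch of the contrastive construction referred to above, under illustrative assumptions (toy sources modulated by a segment-like auxiliary variable u, a generic sklearn classifier): real (x, u) pairs are discriminated from pairs with a shuffled u. In the paper the discriminator is built around a feature extractor h(x) that recovers the independent components; the generic classifier here only illustrates the data construction.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
u = rng.integers(0, 5, size=n)                     # auxiliary variable (e.g., segment index)
s = rng.normal(size=(n, 2)) * (1.0 + u[:, None])   # sources whose variance depends on u
x = np.tanh(s @ rng.normal(size=(2, 2)))           # nonlinear mixing of the sources

u_shuffled = rng.permutation(u)                    # break the x-u dependence
pos = np.column_stack([x, u])                      # label 1: real pairs
neg = np.column_stack([x, u_shuffled])             # label 0: shuffled pairs

X = np.vstack([pos, neg])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)
print("pair-discrimination accuracy:", clf.score(X, y))
```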

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios

no code implementations · 6 Jul 2017 · Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama

Based on the proposed estimator, novel methods both for mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established.

Clustering · Density Estimation
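
The ratio being estimated here is essentially ∇p(x)/p(x) = ∇ log p(x). The sketch below shows how that quantity drives mode-seeking clustering, using a plain Gaussian-KDE gradient as a stand-in for the paper's direct density-derivative-ratio estimator (bandwidth, step size, and the rounding-based mode grouping are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2, 0.5, (100, 1)), rng.normal(2, 0.5, (100, 1))])

def grad_log_kde(y, data, h=0.5):
    """Gradient of the log Gaussian-KDE density at points y of shape (m, d)."""
    diff = data[None, :, :] - y[:, None, :]                  # (m, n, d)
    w = np.exp(-np.sum(diff**2, axis=2) / (2 * h**2))        # kernel weights (m, n)
    grad_p = np.einsum('mn,mnd->md', w, diff) / h**2         # unnormalized gradient of p
    return grad_p / (w.sum(axis=1, keepdims=True) + 1e-12)   # divide by p -> grad log p

points = data.copy()
for _ in range(100):                       # mean-shift-like gradient ascent
    points = points + 0.1 * grad_log_kde(points, data)

modes = np.round(points, 1)                # points reaching the same mode -> same cluster
labels = np.unique(modes, axis=0, return_inverse=True)[1]
print(np.bincount(labels))                 # roughly two clusters of ~100 points each
```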

Whitening-Free Least-Squares Non-Gaussian Component Analysis

1 code implementation · 3 Mar 2016 · Hiroaki Shiino, Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is an unsupervised linear dimension reduction method that extracts low-dimensional non-Gaussian "signals" from high-dimensional data contaminated with Gaussian noise.

Dimensionality Reduction
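
The NGCA setting described above is commonly written as the linear model below (notation assumed here, not quoted from the paper): only the subspace carrying the non-Gaussian signal s is to be recovered, without modeling its unknown distribution.

```latex
x \;=\; A s + n,
\qquad s \in \mathbb{R}^{k} \ \text{non-Gaussian},
\quad n \sim \mathcal{N}(0, \Sigma),
\quad k \ll d .
```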

Non-Gaussian Component Analysis with Log-Density Gradient Estimation

no code implementations · 28 Jan 2016 · Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is aimed at identifying a linear subspace such that the projected data follows a non-Gaussian distribution.

Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction

no code implementations · 5 Aug 2015 · Voot Tangkaratt, Hiroaki Sasaki, Masashi Sugiyama

Quadratic MI (QMI) is a variant of MI based on the L2 distance, which is more robust against outliers than the KL divergence; a computationally efficient method for estimating QMI from data, called least-squares QMI (LSQMI), has recently been proposed.

Dimensionality Reduction
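
For context, the two quantities contrasted above can be written side by side: ordinary MI is the KL divergence between the joint density and the product of marginals, while QMI replaces it with the squared L2 distance, which avoids the logarithm and yields the robustness to outliers mentioned in the abstract.

```latex
\mathrm{MI}(X;Y) = \iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy,
\qquad
\mathrm{QMI}(X;Y) = \iint \bigl(p(x,y) - p(x)\,p(y)\bigr)^{2}\,dx\,dy .
```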

Regularized Multi-Task Learning for Multi-Dimensional Log-Density Gradient Estimation

no code implementations · 1 Aug 2015 · Ikko Yamane, Hiroaki Sasaki, Masashi Sugiyama

Log-density gradient estimation is a fundamental statistical problem with various practical applications, such as clustering and measuring non-Gaussianity.

Clustering · Density Estimation · +1

Simultaneous Estimation of Non-Gaussian Components and their Correlation Structure

no code implementations · 18 Jun 2015 · Hiroaki Sasaki, Michael U. Gutmann, Hayaru Shouno, Aapo Hyvärinen

The precision matrix of the linear components is assumed to be randomly generated by a higher-order process and explicitly parametrized by a parameter matrix.
