no code implementations • 6 Mar 2023 • Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori
To address this problem, we propose a constraint based on symmetric InfoNCE, which helps the deep clustering objective train a model that performs well not only on datasets with simple topology but also on those with complex topology.
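For intuition, here is a minimal numerical sketch of a symmetric InfoNCE loss (CLIP-style symmetrization over the two matching directions); the function name, temperature, and toy embeddings are illustrative assumptions, not the paper's exact constraint:

```python
import numpy as np

def symmetric_infonce(za, zb, tau=0.1):
    """Minimal symmetric InfoNCE sketch. za, zb: (n, d) paired embeddings."""
    # Cosine-similarity logits between the two views.
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                      # (n, n)

    def xent(l):
        # Cross-entropy with the matching pair (i, i) as the positive.
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetrize over the two matching directions (rows and columns).
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(symmetric_infonce(z, z + 0.01 * rng.normal(size=z.shape)))
```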
1 code implementation • 9 Jun 2021 • Léo Andéol, Yusei Kawakami, Yuichiro Wada, Takafumi Kanamori, Klaus-Robert Müller, Grégoire Montavon
Domain shifts in the training data are common in practical applications of machine learning; they occur, for instance, when the data come from different sources.
no code implementations • 1 Jan 2021 • Yuki Mae, Wataru Kumagai, Takafumi Kanamori
We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling with RNNs and on out-of-distribution detection with DNNs.
no code implementations • 18 Oct 2019 • Hiroaki Sasaki, Tomoya Sakai, Takafumi Kanamori
To apply a gradient method to the maximization, the fundamental challenge is to accurately approximate the gradient of MRR, rather than MRR itself.
2 code implementations • 9 Oct 2019 • Song Liu, Takafumi Kanamori, Daniel J. Williams
In this paper, we study parameter estimation for truncated probability densities using score matching (SM).
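As background on the estimator family, the following toy sketch applies plain (untruncated) score matching to a zero-mean Gaussian, where the objective minimizes in closed form; the truncated-boundary case handled by the paper is deliberately out of scope here:

```python
import numpy as np

# Plain score matching for a zero-mean Gaussian with precision theta:
#   p(x) ∝ exp(-theta * x**2 / 2),  score psi(x) = -theta * x.
# The empirical SM objective
#   J(theta) = mean( psi'(x) + 0.5 * psi(x)**2 )
#            = -theta + 0.5 * theta**2 * mean(x**2)
# is minimized in closed form at theta_hat = 1 / mean(x**2).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=10_000)   # true precision = 0.25

theta_hat = 1.0 / np.mean(x**2)
print(f"estimated precision: {theta_hat:.3f} (true: 0.25)")
```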
no code implementations • 23 Jan 2019 • Masatoshi Uehara, Takafumi Kanamori, Takashi Takenouchi, Takeru Matsuda
The parameter estimation of unnormalized models is a challenging problem.
no code implementations • 2 Jun 2018 • Kota Matsui, Wataru Kumagai, Kenta Kanamori, Mitsuaki Nishikimi, Takafumi Kanamori
In this paper, we propose a variable selection method for general nonparametric kernel-based estimation.
1 code implementation • NeurIPS 2019 • Song Liu, Takafumi Kanamori, Wittawat Jitkrittum, Yu Chen
For example, the asymptotic variance of the MLE attains the Cramér-Rao lower bound (the efficiency bound), which is the minimum possible variance for an unbiased estimator.
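For reference, the textbook statement behind this sentence (standard asymptotic theory, not specific to the cited paper) is:

```latex
% Under standard regularity conditions, the MLE is asymptotically efficient:
\sqrt{n}\,\bigl(\hat{\theta}_{\mathrm{MLE}} - \theta^{*}\bigr)
  \xrightarrow{\;d\;} \mathcal{N}\!\bigl(0,\; I(\theta^{*})^{-1}\bigr),
\qquad
I(\theta) = \mathbb{E}_{\theta}\!\bigl[\nabla_{\theta}\log p_{\theta}(X)\,
            \nabla_{\theta}\log p_{\theta}(X)^{\top}\bigr],
% where I(theta*)^{-1}, the inverse Fisher information, is exactly the
% Cramér-Rao lower bound on the variance of unbiased estimators.
```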
no code implementations • 6 Jul 2017 • Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama
Based on the proposed estimator, novel methods for both mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established.
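As a point of reference for mode-seeking clustering, here is a minimal mean-shift sketch (a classical baseline, not the estimator proposed in the paper); the bandwidth and toy data are illustrative:

```python
import numpy as np

def mean_shift(x0, data, bandwidth=0.5, n_iter=100):
    """Move a point toward a mode of a Gaussian kernel density estimate
    via repeated kernel-weighted averaging (classical mean shift)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth**2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()  # kernel-weighted mean
    return x

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, (100, 2)),
                       rng.normal(+2, 0.3, (100, 2))])
print(mean_shift(np.array([-1.0, -1.0]), data))  # converges near (-2, -2)
```

Points initialized near the same mode converge to the same fixed point, which is what makes mode-seeking updates usable for clustering.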
no code implementations • NeurIPS 2015 • Takashi Takenouchi, Takafumi Kanamori
In this paper, we propose a novel parameter estimator for probabilistic models on discrete space.
no code implementations • 13 Sep 2014 • Kota Matsui, Wataru Kumagai, Takafumi Kanamori
Our algorithm consists of two steps: a direction-estimation step and a search step.
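A hedged sketch of such a two-step scheme follows; the specific direction estimator (random finite differences) and search rule (backtracking) below are illustrative stand-ins, not the paper's algorithm:

```python
import numpy as np

def two_step_minimize(f, x, n_dirs=20, h=1e-4, step0=1.0, n_iter=50, seed=0):
    """Illustrative two-step scheme: estimate a direction, then search."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        # Step 1: estimate a descent direction from random finite differences.
        u = rng.normal(size=(n_dirs, x.size))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        fd = np.array([(f(x + h * ui) - f(x)) / h for ui in u])
        g = fd @ u / n_dirs                    # crude gradient estimate
        d = -g / (np.linalg.norm(g) + 1e-12)
        # Step 2: backtracking line search along the estimated direction.
        step = step0
        while step > 1e-10 and f(x + step * d) >= f(x):
            step *= 0.5
        x = x + step * d
    return x

print(two_step_minimize(lambda z: np.sum((z - 3.0) ** 2), np.zeros(5)))
```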
no code implementations • 3 Sep 2014 • Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda
For learning parameters such as the regularization parameter in our algorithm, we derive a simple formula that guarantees the robustness of the classifier.
no code implementations • 11 May 2013 • Takafumi Kanamori, Hironori Fujisawa
By using estimators that are equivariant under affine transformations, one can obtain estimators that do not essentially depend on the choice of the system of units of measurement.
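To unpack the terminology, here is a quick numerical check of affine equivariance for the sample mean and covariance (a standard fact, used here only for illustration):

```python
import numpy as np

# Affine equivariance: under x -> A x + b (e.g. a change of measurement
# units), the sample mean and covariance transform consistently, so the
# resulting estimates do not depend essentially on the unit system.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))
A = np.array([[2.0, 0.5], [0.0, 1.0]])
b = np.array([10.0, -3.0])
y = x @ A.T + b

print(np.allclose(y.mean(axis=0), A @ x.mean(axis=0) + b))   # True
print(np.allclose(np.cov(y.T), A @ np.cov(x.T) @ A.T))       # True
```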
no code implementations • NeurIPS 2012 • Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus D. Plessis, Song Liu, Ichiro Takeuchi
A naive approach is a two-step procedure of first estimating the two densities separately and then computing their difference.
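In the spirit of the direct (one-step) alternative, here is a minimal least-squares density-difference sketch; the Gaussian basis, bandwidth `sigma`, and regularizer `lam` are illustrative assumptions:

```python
import numpy as np

def lsdd(xp, xq, sigma=0.5, lam=1e-3):
    """Fit f(x) = p(x) - q(x) directly as a Gaussian-basis model
    f(x) = sum_l theta_l * exp(-||x - c_l||^2 / (2 sigma^2)),
    with centers c_l taken from the pooled samples."""
    c = np.vstack([xp, xq])                              # basis centers
    d = c.shape[1]

    def phi(x):  # (m, n_centers) design matrix of Gaussian basis values
        return np.exp(-((x[:, None, :] - c[None]) ** 2).sum(-1) / (2 * sigma**2))

    # Closed-form cross-moment matrix H_{ll'} = \int phi_l(x) phi_{l'}(x) dx.
    H = (np.pi * sigma**2) ** (d / 2) * np.exp(
        -((c[:, None, :] - c[None]) ** 2).sum(-1) / (4 * sigma**2))
    h = phi(xp).mean(axis=0) - phi(xq).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(c)), h)
    return lambda x: phi(np.atleast_2d(x)) @ theta      # estimated p - q

rng = np.random.default_rng(0)
f = lsdd(rng.normal(-1, 1, (200, 1)), rng.normal(+1, 1, (200, 1)))
print(f(np.array([[-1.0], [0.0], [1.0]])))  # positive, ~0, negative
```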
no code implementations • NeurIPS 2011 • Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama
Divergence estimators based on direct density-ratio approximation, without separately approximating the numerator and denominator densities, have been successfully applied to machine learning tasks that involve distribution comparison, such as outlier detection, transfer learning, and two-sample homogeneity testing.
1 code implementation • 15 Dec 2009 • Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama
We show that the kernel least-squares method has a smaller condition number than a version of kernel mean matching and other M-estimators, implying that it has preferable numerical properties.
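For concreteness, a minimal sketch of a kernel least-squares (uLSIF-style) density-ratio estimator, which reduces to a single regularized linear system; the kernel width `sigma` and regularizer `lam` are illustrative, and the usual non-negativity clipping is omitted for brevity:

```python
import numpy as np

def kernel_ls_ratio(x_nu, x_de, sigma=0.5, lam=1e-3):
    """Fit r(x) ~ p_nu(x) / p_de(x) as a Gaussian-kernel model by
    least squares: solve (H + lam I) alpha = h."""
    c = x_nu                                             # kernel centers

    def K(a, b):
        return np.exp(-((a[:, None, :] - b[None]) ** 2).sum(-1) / (2 * sigma**2))

    H = K(x_de, c).T @ K(x_de, c) / len(x_de)            # denominator moments
    h = K(x_nu, c).mean(axis=0)                          # numerator moments
    alpha = np.linalg.solve(H + lam * np.eye(len(c)), h)
    return lambda x: K(np.atleast_2d(x), c) @ alpha      # estimated ratio

rng = np.random.default_rng(0)
r = kernel_ls_ratio(rng.normal(0, 1, (200, 1)), rng.normal(0, 2, (200, 1)))
print(r(np.array([[0.0], [3.0]])))   # ratio > 1 near 0, < 1 in the tails
```

The quadratic objective is what gives the method its single linear-system solution, and hence the well-conditioned behavior the sentence above refers to.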
no code implementations • NeurIPS 2008 • Takafumi Kanamori, Shohei Hido, Masashi Sugiyama
We address the problem of estimating the ratio of two probability density functions (a.k.a. the importance).