no code implementations • 28 Oct 2024 • Takafumi Kanamori, Kodai Yokoyama, Takayuki Kawashima
However, such an assumption is often violated in practice.
no code implementations • 28 Oct 2024 • Yoshitaka Koike, Takumi Nakagawa, Hiroki Waida, Takafumi Kanamori
We investigate Diffusion-GAN and reveal that data scaling is a key component for stable learning and high-quality data generation.
no code implementations • 26 Jul 2024 • Hiroo Irobe, Wataru Aoki, Kimihiro Yamazaki, Yuhui Zhang, Takumi Nakagawa, Hiroki Waida, Yuichiro Wada, Takafumi Kanamori
Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning.
1 code implementation • 21 Sep 2023 • Kei Ishikawa, Niao He, Takafumi Kanamori
We study policy evaluation of offline contextual bandits subject to unobserved confounders.
no code implementations • 19 Apr 2023 • Takumi Nakagawa, Yutaro Sanada, Hiroki Waida, Yuhui Zhang, Yuichiro Wada, Kōsaku Takanashi, Tomonori Yamada, Takafumi Kanamori
To this end, inspired by recent works on denoising and the success of cosine-similarity-based objective functions in representation learning, we propose the denoising Cosine-Similarity (dCS) loss.
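As a rough illustration only (the exact dCS formulation is defined in the paper itself), a cosine-similarity-based denoising objective can look like the following PyTorch-style sketch; the encoder, the additive Gaussian corruption, and the stop-gradient on the clean target are assumptions made here for concreteness, not the paper's design.

```python
# Hypothetical sketch of a cosine-similarity-based denoising objective
# (not the paper's exact dCS loss): pull the representation of a noisy
# input toward the representation of its clean counterpart.
import torch
import torch.nn.functional as F

def cosine_denoising_loss(encoder, x_clean, noise_std=0.1):
    """encoder: any nn.Module mapping inputs to feature vectors."""
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)  # corrupt the input
    z_clean = encoder(x_clean).detach()   # target representation (no gradient)
    z_noisy = encoder(x_noisy)            # representation being trained
    # 1 - cosine similarity, averaged over the batch
    return (1.0 - F.cosine_similarity(z_noisy, z_clean, dim=-1)).mean()
```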
no code implementations • 1 Apr 2023 • Hiroki Waida, Yuichiro Wada, Léo Andéol, Takumi Nakagawa, Yuhui Zhang, Takafumi Kanamori
We first prove that the formulation characterizes the structure of representations learned with the kernel-based contrastive learning framework.
no code implementations • 6 Mar 2023 • Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori
To address the problem, we propose a constraint utilizing symmetric InfoNCE, which helps the objective of the deep clustering method in this scenario train the model so that it is effective not only on datasets with non-complex topology but also on those with complex topology.
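For readers unfamiliar with the term, a symmetric InfoNCE term typically applies the InfoNCE cross-entropy in both pairing directions and averages the two; the sketch below is a generic PyTorch version of that idea, not the paper's exact constraint.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(z_a, z_b, temperature=0.1):
    """z_a, z_b: (batch, dim) embeddings of two views; row i of z_a pairs with row i of z_b."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                      # pairwise similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)     # positives on the diagonal
    # InfoNCE in both directions (a -> b and b -> a), then averaged
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```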
1 code implementation • 9 Jun 2021 • Léo Andeol, Yusei Kawakami, Yuichiro Wada, Takafumi Kanamori, Klaus-Robert Müller, Grégoire Montavon
However, common ML losses do not give strong guarantees on how consistently the ML model performs across different domains, in particular whether the model performs well on one domain at the expense of its performance on another.
no code implementations • 1 Jan 2021 • Yuki Mae, Wataru Kumagai, Takafumi Kanamori
We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling with RNNs and on out-of-distribution detection with DNNs.
no code implementations • 18 Oct 2019 • Hiroaki Sasaki, Tomoya Sakai, Takafumi Kanamori
To apply a gradient method to this maximization, the fundamental challenge is to accurately approximate the gradient of the MRR, rather than the MRR itself.
2 code implementations • 9 Oct 2019 • Song Liu, Takafumi Kanamori, Daniel J. Williams
In this paper, we study parameter estimation for truncated probability densities using score matching (SM).
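As background, and up to notational differences from the paper, classical score matching fits the model score \(\psi_\theta(x)=\nabla_x\log q_\theta(x)\) to the data score; a truncated-density variant can be sketched as a weighted version whose weight \(g\) vanishes on the truncation boundary, so that boundary terms disappear under integration by parts. This is a generic sketch, not necessarily the estimator of the cited paper:
\[
J(\theta) = \mathbb{E}_{p(x)}\!\left[\tfrac{1}{2}\,\|\psi_\theta(x)\|^2 + \operatorname{tr}\nabla_x\psi_\theta(x)\right] + \mathrm{const.},
\]
\[
J_g(\theta) = \mathbb{E}_{p(x)}\!\left[\tfrac{g(x)}{2}\,\|\psi_\theta(x)\|^2 + g(x)\operatorname{tr}\nabla_x\psi_\theta(x) + \nabla_x g(x)^{\top}\psi_\theta(x)\right] + \mathrm{const.}
\]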
no code implementations • 23 Jan 2019 • Masatoshi Uehara, Takafumi Kanamori, Takashi Takenouchi, Takeru Matsuda
The parameter estimation of unnormalized models is a challenging problem.
no code implementations • 2 Jun 2018 • Kota Matsui, Wataru Kumagai, Kenta Kanamori, Mitsuaki Nishikimi, Takafumi Kanamori
In this paper, we propose a variable selection method for general nonparametric kernel-based estimation.
1 code implementation • NeurIPS 2019 • Song Liu, Takafumi Kanamori, Wittawat Jitkrittum, Yu Chen
For example, the asymptotic variance of the MLE attains the asymptotic Cramér-Rao lower bound (efficiency bound), which is the minimum possible variance for an unbiased estimator.
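For reference, the efficiency bound in question is the classical one: for i.i.d. samples and Fisher information \(I(\theta)\), any unbiased estimator \(\hat\theta\) based on \(n\) samples satisfies
\[
\mathrm{Cov}(\hat\theta) \succeq \tfrac{1}{n}\, I(\theta)^{-1},
\qquad
I(\theta) = \mathbb{E}_\theta\!\left[\nabla_\theta \log p_\theta(X)\,\nabla_\theta \log p_\theta(X)^{\top}\right],
\]
and under standard regularity conditions the MLE attains it asymptotically, \(\sqrt{n}\,(\hat\theta_{\mathrm{MLE}}-\theta)\xrightarrow{d}\mathcal{N}\big(0,\, I(\theta)^{-1}\big)\).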
no code implementations • 6 Jul 2017 • Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama
Based on the proposed estimator, novel methods both for mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established.
no code implementations • NeurIPS 2015 • Takashi Takenouchi, Takafumi Kanamori
In this paper, we propose a novel parameter estimator for probabilistic models on discrete space.
no code implementations • 13 Sep 2014 • Kota Matsui, Wataru Kumagai, Takafumi Kanamori
Our algorithm consists of two steps: a direction-estimation step and a search step.
no code implementations • 3 Sep 2014 • Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda
For learning parameters such as the regularization parameter in our algorithm, we derive a simple formula that guarantees the robustness of the classifier.
no code implementations • 11 May 2013 • Takafumi Kanamori, Hironori Fujisawa
By using estimators that are equivariant under affine transformations, one can obtain estimators that essentially do not depend on the choice of the system of units used in measurement.
no code implementations • NeurIPS 2012 • Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus D. Plessis, Song Liu, Ichiro Takeuchi
A naive approach is a two-step procedure of first estimating two densities separately and then computing their difference.
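By contrast, a direct (single-step) approach fits the difference itself. A least-squares sketch of that idea (notation mine, not necessarily the paper's exact estimator): with \(f(x)=p(x)-q(x)\), fit a model \(g\) by
\[
\min_{g}\ \int \big(g(x)-f(x)\big)^2\,dx
\;=\; \min_{g}\ \int g(x)^2\,dx \;-\; 2\,\mathbb{E}_{p}[g(X)] \;+\; 2\,\mathbb{E}_{q}[g(X)] \;+\; \mathrm{const.},
\]
where the two expectations can be replaced by sample averages from the respective datasets, so neither density needs to be estimated on its own.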
no code implementations • NeurIPS 2011 • Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama
Divergence estimators based on direct approximation of density-ratios without going through separate approximation of numerator and denominator densities have been successfully applied to machine learning tasks that involve distribution comparison such as outlier detection, transfer learning, and two-sample homogeneity test.
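To make the idea concrete (a generic sketch, not the specific estimator of the cited paper): given a fitted ratio \(\hat r(x)\approx p(x)/q(x)\), a divergence such as the KL divergence can be estimated by plugging the ratio into its definition,
\[
\mathrm{KL}(P\,\|\,Q)=\mathbb{E}_{P}\!\left[\log \frac{p(X)}{q(X)}\right]\;\approx\;\frac{1}{n}\sum_{i=1}^{n}\log \hat r\big(x_i^{P}\big).
\]
One well-known refinement along these lines replaces the plain ratio with a bounded relative ratio \(r_\alpha(x)=p(x)/\big(\alpha p(x)+(1-\alpha)q(x)\big)\), which keeps the estimand finite even when the denominator density has thin tails.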
1 code implementation • 15 Dec 2009 • Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama
We show that the kernel least-squares method has a smaller condition number than a version of kernel mean matching and other M-estimators, implying that the kernel least-squares method has preferable numerical properties.
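A minimal NumPy sketch of a kernel least-squares density-ratio fit of the kind being compared here; the Gaussian kernel, the choice of centers, and the ridge regularization are simplifications for illustration, not the paper's exact estimator.

```python
import numpy as np

def gaussian_kernel(x, centers, sigma):
    # x: (n, d), centers: (b, d) -> (n, b) kernel matrix
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_least_squares_ratio(x_nu, x_de, sigma=1.0, lam=1e-3):
    """Fit r(x) ~= p_nu(x)/p_de(x) as a kernel expansion by least squares.
    The squared-error criterion reduces (up to a constant) to a
    ridge-regularized linear system in the expansion coefficients."""
    centers = x_nu                                    # numerator samples as kernel centers
    Phi_de = gaussian_kernel(x_de, centers, sigma)    # (n_de, b)
    Phi_nu = gaussian_kernel(x_nu, centers, sigma)    # (n_nu, b)
    H = Phi_de.T @ Phi_de / x_de.shape[0]             # empirical E_de[phi(x) phi(x)^T]
    h = Phi_nu.mean(axis=0)                           # empirical E_nu[phi(x)]
    alpha = np.linalg.solve(H + lam * np.eye(H.shape[0]), h)
    alpha = np.maximum(alpha, 0.0)                    # optionally clip to keep the ratio non-negative
    return lambda x: gaussian_kernel(x, centers, sigma) @ alpha
```

Because the fit reduces to solving a single regularized linear system, its numerical behavior is governed by the conditioning of that system, which is presumably the kind of quantity the condition-number comparison refers to.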
no code implementations • NeurIPS 2008 • Takafumi Kanamori, Shohei Hido, Masashi Sugiyama
We address the problem of estimating the ratio of two probability density functions (a.k.a. the importance).
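Concretely, the importance is \(w(x)=p_{\mathrm{nu}}(x)/p_{\mathrm{de}}(x)\), and its usefulness comes from the importance-weighting identity
\[
\mathbb{E}_{p_{\mathrm{nu}}}\!\left[f(X)\right]=\mathbb{E}_{p_{\mathrm{de}}}\!\left[w(X)\,f(X)\right],
\]
so a good estimate of \(w\) enables, for example, covariate-shift adaptation without estimating either density separately.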