no code implementations • 21 Jun 2021 • Tomoya Sakai
What if deep neural networks can learn from sparsity-inducing priors?
no code implementations • 15 Jan 2021 • Tomoya Sakai, Naoto Ohsaka
The task is regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains.
no code implementations • 10 Jun 2020 • Akira Tanimoto, Tomoya Sakai, Takashi Takenouchi, Hisashi Kashima
Predicting which action (treatment) will lead to a better outcome is a central task in decision support systems.
1 code implementation • ICML 2020 • Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama
We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
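Flooding keeps the training loss from falling below a fixed "flood level" b, as in the Ishida et al. paper: the flooded loss is |L - b| + b, so the gradient direction is unchanged while L > b and flipped once L drops below b. A minimal sketch (the flood level b is a hyperparameter):

```python
def flooded_loss(loss, b=0.1):
    # Flooding: |loss - b| + b. Above the flood level the value and
    # gradient are unchanged; below it, the sign of the gradient flips,
    # pushing the training loss back up toward b.
    return abs(loss - b) + b

print(flooded_loss(0.5))   # 0.5: above the flood level, unchanged
print(flooded_loss(0.02))  # 0.18: reflected back above b
```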
no code implementations • 18 Oct 2019 • Hiroaki Sasaki, Tomoya Sakai, Takafumi Kanamori
In order to apply a gradient method for the maximization, the fundamental challenge is accurately approximating the gradient of MRR, rather than MRR itself.
no code implementations • 13 Mar 2018 • Masayoshi Hayashi, Tomoya Sakai, Masashi Sugiyama
In this paper, motivated by the semi-supervised classification method recently proposed by Sakai et al. (2017), we develop a method for the BMC problem that can use all of the positive, negative, and unobserved entries by combining the risks of Davenport et al. (2014) and Hsieh et al. (2015).
no code implementations • 15 Oct 2017 • Tomoya Sakai, Gang Niu, Masashi Sugiyama
Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data.
no code implementations • 4 May 2017 • Tomoya Sakai, Gang Niu, Masashi Sugiyama
Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification.
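For context, AUC admits a pairwise ranking interpretation: it is the probability that a randomly drawn positive example is scored above a randomly drawn negative one (ties counted as one half). A minimal numpy sketch of this standard formulation (not the paper's semi-supervised method):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    # AUC as the fraction of (positive, negative) pairs ranked
    # correctly by the scores; ties contribute 1/2.
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return float((sp > sn).mean() + 0.5 * (sp == sn).mean())

print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5/6: one misranked pair
```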
1 code implementation • 22 Apr 2017 • Han Bao, Tomoya Sakai, Issei Sato, Masashi Sugiyama
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available.
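Under the standard MIL assumption (not specific to this paper's estimator), a bag is labeled positive if and only if it contains at least one positive instance; instance labels are latent at training time. A tiny illustration:

```python
def bag_label(instance_labels):
    # Standard MIL assumption: the bag is positive iff at least one
    # of its instances is positive. Only this bag-level label is
    # observed during training; instance labels stay hidden.
    return int(any(instance_labels))

bags = [[0, 0, 1], [0, 0, 0]]
print([bag_label(b) for b in bags])  # [1, 0]
```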
no code implementations • ICML 2017 • Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
Most of the semi-supervised classification methods developed so far use unlabeled data for regularization purposes under particular distributional assumptions such as the cluster assumption.
no code implementations • NeurIPS 2016 • Gang Niu, Marthinus Christoffel du Plessis, Tomoya Sakai, Yao Ma, Masashi Sugiyama
In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data without negative (N) data.