no code implementations • 11 Sep 2018 • Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama
A previously proposed discrepancy measure that does not use source-domain labels is computationally expensive to estimate and may lead to a loose generalization error bound in the target domain.
no code implementations • 19 Sep 2018 • Nontawat Charoenphakdee, Masashi Sugiyama
Based on the analysis of the Bayes optimal classifier, we show that given a test class prior, PU classification under class prior shift is equivalent to PU classification with asymmetric error.
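The stated equivalence can be sketched as follows (a sketch assuming both the training class prior \(\pi\) and the test class prior \(\pi'\) are known; the symbols \(\eta\) and \(\theta\) are notation introduced here for illustration, not taken from the paper):

```latex
% Bayes-optimal classification under a class prior shift from \pi (train) to \pi' (test).
% Writing \eta(x) = p_{\mathrm{tr}}(y = +1 \mid x) for the training posterior, the
% test-time Bayes classifier thresholds \eta(x) at a value \theta set by the two priors:
\[
f^*(x) = \operatorname{sign}\bigl(\eta(x) - \theta\bigr),
\qquad
\theta = \frac{\pi(1 - \pi')}{\pi(1 - \pi') + (1 - \pi)\pi'} .
\]
% Since \theta = 1/2 only when \pi' = \pi, a shifted test prior moves the decision
% threshold away from 1/2, which is exactly classification with asymmetric
% false-positive / false-negative costs on the training distribution.
```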
no code implementations • 27 Jan 2019 • Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, Masashi Sugiyama
Imitation learning (IL) aims to learn an optimal policy from demonstrations.
1 code implementation • 27 Jan 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama
This paper aims to provide a better understanding of a symmetric loss.
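A margin loss \(\ell\) is called symmetric when \(\ell(z) + \ell(-z)\) is a constant for all \(z\); the sigmoid loss is a classic example. The snippet below is an illustrative check of this property (the function names are ours, not from the paper):

```python
import math

def sigmoid_loss(z):
    # Sigmoid loss: a standard example of a symmetric loss.
    return 1.0 / (1.0 + math.exp(z))

def is_symmetric(loss, const, zs):
    # A margin loss l is symmetric if l(z) + l(-z) == const for every margin z.
    return all(abs(loss(z) + loss(-z) - const) < 1e-9 for z in zs)

# Sigmoid loss satisfies l(z) + l(-z) = 1 everywhere; logistic loss does not.
print(is_symmetric(sigmoid_loss, 1.0, [-5.0, -1.0, 0.0, 0.5, 3.0]))  # True
```

Such losses are known to be robust under symmetric label noise, which is one reason they are worth understanding in depth.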
1 code implementation • NeurIPS 2019 • Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama
First, we consider an approach based on simultaneous training of a classifier and a rejector, which achieves state-of-the-art performance in the binary case.
no code implementations • 30 Jan 2019 • Jongyeong Lee, Nontawat Charoenphakdee, Seiichi Kuroki, Masashi Sugiyama
Appropriately evaluating the discrepancy between domains is essential for the success of unsupervised domain adaptation.
no code implementations • 31 Jan 2019 • Taira Tsuchiya, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama
We further provide an estimation error bound to show that our risk estimator is consistent.
1 code implementation • 24 Jul 2019 • Zhenghang Cui, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama
Although learning from triplet comparison data has been considered in many applications, the fundamental question of whether a classifier can be learned from triplet comparison data alone has remained unanswered.
no code implementations • IJCNLP 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, Masashi Sugiyama
We consider a document classification problem where document labels are absent but only relevant keywords of a target class and unlabeled documents are given.
1 code implementation • 10 Oct 2019 • Yivan Zhang, Nontawat Charoenphakdee, Masashi Sugiyama
Weakly-supervised learning is a paradigm for alleviating the scarcity of labeled data by leveraging lower-quality but larger-scale supervision signals.
no code implementations • 10 Mar 2020 • Hideaki Imamura, Nontawat Charoenphakdee, Futoshi Futami, Issei Sato, Junya Honda, Masashi Sugiyama
If the black-box function varies with time, then time-varying Bayesian optimization is a promising framework.
1 code implementation • NeurIPS 2020 • Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals.
1 code implementation • 20 Oct 2020 • Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama
Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning.
no code implementations • 22 Oct 2020 • Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama
The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection.
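A simple baseline for classification with rejection is confidence-based rejection (Chow's rule): abstain whenever the maximum posterior probability falls below a threshold tied to the rejection cost. A minimal sketch (the function name and the cost value are illustrative assumptions, not from the paper):

```python
import numpy as np

def predict_with_rejection(probs, cost=0.2):
    # Chow's rule: reject when the maximum posterior probability is
    # below 1 - cost, for a rejection cost in [0, 0.5].
    # probs: (n_samples, n_classes) array of class probabilities.
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= 1.0 - cost, preds, -1)  # -1 denotes "reject"

probs = np.array([[0.90, 0.10],   # confident -> predict class 0
                  [0.55, 0.45],   # uncertain -> reject
                  [0.20, 0.80]])  # confident -> predict class 1
print(predict_with_rejection(probs))  # [ 0 -1  1]
```

This rule is optimal when the probabilities are the true posteriors; much of the literature concerns what to do when they are not.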
no code implementations • CVPR 2021 • Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama
In this paper, we first prove that the focal loss is classification-calibrated, i.e., its minimizer surely yields the Bayes-optimal classifier and thus the use of the focal loss in classification can be theoretically justified.
no code implementations • 5 Jan 2021 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama
When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize.
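The zero-one loss is discontinuous and non-convex, so in practice one minimizes a convex surrogate that upper-bounds it, such as the logistic loss. A minimal illustration (function names are ours):

```python
import math

def zero_one_loss(z):
    # Zero-one loss on the margin z = y * f(x): 1 if misclassified, else 0.
    return 0.0 if z > 0 else 1.0

def logistic_loss(z):
    # Logistic loss in log base 2, so that loss(0) = 1; a convex,
    # differentiable surrogate that upper-bounds the zero-one loss.
    return math.log2(1.0 + math.exp(-z))

for z in [-3.0, -0.5, 0.0, 0.5, 3.0]:
    assert logistic_loss(z) >= zero_one_loss(z)
```

The key theoretical question, addressed by calibration analysis, is when minimizing such a surrogate also minimizes the zero-one risk.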
1 code implementation • Findings (EMNLP) 2021 • Nuttapong Chairatanakul, Noppayut Sriwatanasakdi, Nontawat Charoenphakdee, Xin Liu, Tsuyoshi Murata
To address this challenge, we propose a dictionary-based heterogeneous graph neural network (DHGNet) that effectively handles the heterogeneity of the DHG through two-step aggregation: word-level and language-level aggregation.
1 code implementation • 1 Feb 2022 • Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama
In contrast to others, our method is model-free and even instance-free.
1 code implementation • 31 Oct 2022 • Shuhan Zheng, Nontawat Charoenphakdee
To our knowledge, little attention has been paid to investigating the effectiveness of diffusion models for missing value imputation in tabular data.
no code implementations • 19 Jun 2023 • Kenta Oono, Nontawat Charoenphakdee, Kotatsu Bito, Zhengyan Gao, Yoshiaki Ota, Shoichiro Yamaguchi, Yohei Sugawara, Shin-ichi Maeda, Kunihiko Miyoshi, Yuki Saito, Koki Tsuda, Hiroshi Maruyama, Kohei Hayashi
In this paper, we propose the Virtual Human Generative Model (VHGM), a machine learning model for estimating healthcare, lifestyle, and personality attributes.