
no code implementations • 1 Feb 2022 • Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama

There is a fundamental limitation in the prediction performance that a machine learning model can achieve due to the inevitable uncertainty of the prediction target.

1 code implementation • Findings (EMNLP) 2021 • Nuttapong Chairatanakul, Noppayut Sriwatanasakdi, Nontawat Charoenphakdee, Xin Liu, Tsuyoshi Murata

To address this challenge, we propose dictionary-based heterogeneous graph neural network (DHGNet) that effectively handles the heterogeneity of DHG by two-step aggregations, which are word-level and language-level aggregations.

no code implementations • 5 Jan 2021 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama

When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize.
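The surrogate-loss idea can be illustrated with a minimal sketch (not from the paper): the zero-one loss is discontinuous and infeasible to optimize directly, so it is replaced by a convex function of the margin, here the base-2 logistic loss, which upper-bounds it.

```python
import numpy as np

# Margin z = y * f(x): a positive margin means a correct prediction.
def zero_one_loss(z):
    # Non-convex and discontinuous, hence infeasible to optimize directly.
    return (z <= 0).astype(float)

def logistic_loss(z):
    # A convex surrogate; in base 2 it upper-bounds the zero-one loss.
    return np.log2(1.0 + np.exp(-z))

margins = np.linspace(-2.0, 2.0, 9)
# The surrogate dominates the zero-one loss at every margin value.
assert np.all(logistic_loss(margins) >= zero_one_loss(margins))
```

Other common surrogates (hinge, squared, exponential) play the same role; which surrogates are "safe" replacements is exactly the calibration question this line of work studies.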

no code implementations • CVPR 2021 • Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama

In this paper, we first prove that the focal loss is classification-calibrated, i.e., its minimizer surely yields the Bayes-optimal classifier and thus the use of the focal loss in classification can be theoretically justified.
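For reference, the binary focal loss (Lin et al., 2017) that the paper analyzes down-weights easy examples by the factor (1 - p_t)^gamma; a minimal sketch, with gamma = 0 recovering ordinary cross-entropy:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    # Binary focal loss: p is the predicted probability of the positive
    # class, y is in {0, 1}; (1 - p_t)^gamma shrinks the loss on
    # confidently correct (easy) examples.
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

p = np.array([0.9, 0.6, 0.1])
y = np.array([1, 1, 1])
ce = -np.log(p)  # ordinary cross-entropy
# gamma = 0 recovers cross-entropy exactly.
assert np.allclose(focal_loss(p, y, gamma=0.0), ce)
# With gamma > 0, every example's loss is damped relative to cross-entropy.
assert np.all(focal_loss(p, y, gamma=2.0) <= ce)
```

Classification-calibration means that despite this reweighting, the population minimizer still thresholds the class-posterior correctly, so the induced classifier is Bayes-optimal.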

no code implementations • 22 Oct 2020 • Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama

The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection.
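A common baseline for this setting (not necessarily the paper's method) is Chow's rule: abstain whenever the classifier's confidence falls below a threshold determined by the rejection cost. A minimal sketch, with `cost` and the `-1` reject marker as illustrative choices:

```python
import numpy as np

def predict_with_reject(probs, cost=0.2):
    # Chow's rule: reject when the top class probability is below
    # 1 - cost, where cost in [0, 0.5) is the price of abstaining
    # relative to a misclassification. -1 marks a rejected example.
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return np.where(confidence >= 1.0 - cost, labels, -1)

probs = np.array([[0.95, 0.05],   # confident -> predict class 0
                  [0.55, 0.45]])  # uncertain -> reject
print(predict_with_reject(probs, cost=0.2))  # [ 0 -1]
```

The rule is optimal only when the probability estimates are accurate; the rejection literature studies how to learn classifier and rejector when they are not.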

1 code implementation • 20 Oct 2020 • Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama

Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning.

1 code implementation • NeurIPS 2020 • Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama

We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals.

no code implementations • 10 Mar 2020 • Hideaki Imamura, Nontawat Charoenphakdee, Futoshi Futami, Issei Sato, Junya Honda, Masashi Sugiyama

If the black-box function varies with time, then time-varying Bayesian optimization is a promising framework.

no code implementations • IJCNLP 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, Masashi Sugiyama

We consider a document classification problem where document labels are absent but only relevant keywords of a target class and unlabeled documents are given.

1 code implementation • 10 Oct 2019 • Yivan Zhang, Nontawat Charoenphakdee, Masashi Sugiyama

Weakly-supervised learning is a paradigm for alleviating the scarcity of labeled data by leveraging lower-quality but larger-scale supervision signals.

1 code implementation • 24 Jul 2019 • Zhenghang Cui, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama

Although learning from triplet comparison data has been considered in many applications, an important fundamental question of whether we can learn a classifier only from triplet comparison data has remained unanswered.

no code implementations • 31 Jan 2019 • Taira Tsuchiya, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama

We further provide an estimation error bound to show that our risk estimator is consistent.

no code implementations • 30 Jan 2019 • Jongyeong Lee, Nontawat Charoenphakdee, Seiichi Kuroki, Masashi Sugiyama

Appropriately evaluating the discrepancy between domains is essential for the success of unsupervised domain adaptation.

1 code implementation • NeurIPS 2019 • Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama

First, we consider an approach based on simultaneous training of a classifier and a rejector, which achieves the state-of-the-art performance in the binary case.

no code implementations • 27 Jan 2019 • Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, Masashi Sugiyama

Imitation learning (IL) aims to learn an optimal policy from demonstrations.

1 code implementation • 27 Jan 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama

This paper aims to provide a better understanding of a symmetric loss.

no code implementations • 19 Sep 2018 • Nontawat Charoenphakdee, Masashi Sugiyama

Based on the analysis of the Bayes optimal classifier, we show that given a test class prior, PU classification under class prior shift is equivalent to PU classification with asymmetric error.
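The asymmetric-error viewpoint can be sketched with the classical cost-sensitive Bayes rule (a standard fact, not the paper's derivation): with false-positive cost c_fp and false-negative cost c_fn, the Bayes-optimal classifier thresholds the class-posterior eta(x) = p(y=1 | x) at c_fp / (c_fp + c_fn) instead of 1/2.

```python
import numpy as np

def bayes_decision(eta, c_fp, c_fn):
    # Cost-sensitive Bayes rule: predict positive when the posterior
    # eta(x) = p(y=1 | x) exceeds c_fp / (c_fp + c_fn). Symmetric costs
    # recover the familiar 1/2 threshold.
    return (eta > c_fp / (c_fp + c_fn)).astype(int)

eta = np.array([0.3, 0.5, 0.7])
# Symmetric costs: threshold 1/2.
assert np.array_equal(bayes_decision(eta, 1.0, 1.0), np.array([0, 0, 1]))
# Penalizing false negatives 3x more lowers the threshold to 0.25.
assert np.array_equal(bayes_decision(eta, 1.0, 3.0), np.array([1, 1, 1]))
```

A shift in the test class prior moves this optimal threshold in the same way a change of costs does, which is the sense in which the two problems coincide.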

no code implementations • 11 Sep 2018 • Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama

A previously proposed discrepancy that does not use the source domain labels requires high computational cost to estimate and may lead to a loose generalization error bound in the target domain.

Papers With Code is a free resource with all data licensed under CC-BY-SA.