Search Results for author: Tomoya Sakai

Found 11 papers, 2 papers with code

Unsupervised Deep Learning by Injecting Low-Rank and Sparse Priors

no code implementations · 21 Jun 2021 · Tomoya Sakai

What if deep neural networks can learn from sparsity-inducing priors?

Tasks: Inductive Bias
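
The paper's architecture is not described in this snippet; as background, the two priors it names are typically imposed through their proximal operators: soft-thresholding for element-wise sparsity and singular-value thresholding for low rank. A minimal sketch with hypothetical function names, not the paper's method:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    setting small ones exactly to zero (sparsity-inducing prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def singular_value_threshold(X, lam):
    """Proximal operator of the nuclear norm: soft-thresholds the
    singular values, promoting a low-rank matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, lam)) @ Vt

# A sparse prior zeroes out small entries and shrinks the rest:
print(soft_threshold(np.array([0.2, -1.5, 0.05, 3.0]), 0.5))
```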

Predictive Optimization with Zero-Shot Domain Adaptation

no code implementations · 15 Jan 2021 · Tomoya Sakai, Naoto Ohsaka

The task is regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains.

Tasks: Domain Adaptation

Regret Minimization for Causal Inference on Large Treatment Space

no code implementations · 10 Jun 2020 · Akira Tanimoto, Tomoya Sakai, Takashi Takenouchi, Hisashi Kashima

Predicting which action (treatment) will lead to a better outcome is a central task in decision support systems.

Tasks: Causal Inference · counterfactual (+1 more)

Do We Need Zero Training Loss After Achieving Zero Training Error?

1 code implementation · ICML 2020 · Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama

We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
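
The flooding objective referenced in this snippet is simple to state: with a small constant "flood level" b, the training loss L is replaced by |L − b| + b, so whenever the loss sinks below b the gradient flips sign and the optimizer drifts back up toward b, preventing zero training loss. A minimal sketch (the flood level 0.02 is an illustrative value, not one from the paper):

```python
def flooded_loss(loss, b=0.02):
    """Flooding: |loss - b| + b. Above the flood level b the objective
    equals the original loss; below it, minimizing the objective pushes
    the training loss back up toward b."""
    return abs(loss - b) + b

print(flooded_loss(0.5))    # identical to the original loss above b
print(flooded_loss(0.005))  # larger than the original loss below b
```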


Robust modal regression with direct log-density derivative estimation

no code implementations · 18 Oct 2019 · Hiroaki Sasaki, Tomoya Sakai, Takafumi Kanamori

In order to apply a gradient method to the maximization, the fundamental challenge is to accurately approximate the gradient of MRR, not MRR itself.

Tasks: Density Estimation · regression
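
For context, modal regression estimates the conditional mode of y given x, which is robust to outliers that would skew the conditional mean. A common baseline maximizes a kernel density estimate with mean-shift updates; the sketch below shows that baseline (the paper instead estimates the log-density derivative directly, and the function and parameter names here are hypothetical):

```python
import numpy as np

def conditional_mode(x0, X, Y, y0, bw=0.5, iters=200):
    """Mean-shift ascent on a Gaussian kernel density estimate of
    p(y | x ~ x0), starting from y0 (a standard modal-regression baseline)."""
    wx = np.exp(-0.5 * ((X - x0) / bw) ** 2)   # kernel weights in x
    y = y0
    for _ in range(iters):
        wy = np.exp(-0.5 * ((Y - y) / bw) ** 2)
        w = wx * wy
        y = np.sum(w * Y) / np.sum(w)          # mean-shift step in y
    return y

# Nine inliers near y=1 and one outlier at y=5: the mode stays near 1,
# while the conditional mean (1.4) is pulled toward the outlier.
Y = np.array([1.0] * 9 + [5.0])
X = np.zeros(10)
print(conditional_mode(0.0, X, Y, y0=1.5))
```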

Binary Matrix Completion Using Unobserved Entries

no code implementations · 13 Mar 2018 · Masayoshi Hayashi, Tomoya Sakai, Masashi Sugiyama

In this paper, motivated by a semi-supervised classification method recently proposed by Sakai et al. (2017), we develop a method for the BMC problem which can use all of positive, negative, and unobserved entries, by combining the risks of Davenport et al. (2014) and Hsieh et al. (2015).

Tasks: General Classification · Matrix Completion (+1 more)

Information-Theoretic Representation Learning for Positive-Unlabeled Classification

no code implementations · 15 Oct 2017 · Tomoya Sakai, Gang Niu, Masashi Sugiyama

Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data.

Tasks: Classification · Dimensionality Reduction (+3 more)
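
Training a classifier from positive and unlabeled data typically rests on a risk estimator that rewrites the classification risk using only positive scores, unlabeled scores, and the class prior, such as the non-negative PU risk of Kiryo et al. (2017). A minimal sketch of that estimator, not the representation-learning method of this paper:

```python
import numpy as np

def sigmoid_loss(z):
    """Smooth surrogate for the zero-one loss: l(z) = 1 / (1 + exp(z))."""
    return 1.0 / (1.0 + np.exp(z))

def nn_pu_risk(scores_p, scores_u, pi_p):
    """Non-negative PU risk: pi_p * R_p^+ + max(0, R_u^- - pi_p * R_p^-),
    where scores are classifier outputs and pi_p is the positive class prior.
    The max(0, .) clips the negative-class term to prevent overfitting."""
    risk_p_pos = np.mean(sigmoid_loss(scores_p))    # positives labeled +1
    risk_p_neg = np.mean(sigmoid_loss(-scores_p))   # positives labeled -1
    risk_u_neg = np.mean(sigmoid_loss(-scores_u))   # unlabeled labeled -1
    return pi_p * risk_p_pos + max(0.0, risk_u_neg - pi_p * risk_p_neg)
```

A well-separated classifier drives this risk toward zero even though no negative labels are used.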

Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning

no code implementations · 4 May 2017 · Tomoya Sakai, Gang Niu, Masashi Sugiyama

Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification.

Tasks: imbalanced classification
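
The AUC being maximized is the probability that a randomly drawn positive is scored above a randomly drawn negative; empirically, it is the fraction of correctly ranked positive-negative pairs (ties counting half). A minimal sketch of the empirical estimate:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs in which the
    positive instance receives the higher score; ties count as 0.5."""
    diff = scores_pos[:, None] - scores_neg[None, :]  # all pairwise gaps
    return np.mean((diff > 0) + 0.5 * (diff == 0))

# A ranker that separates every positive from every negative attains AUC 1.
print(empirical_auc(np.array([2.0, 3.0]), np.array([0.0, 1.0])))  # 1.0
```

Because AUC is a pairwise quantity, AUC optimization methods typically minimize a surrogate loss over these score differences rather than over individual labels.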

Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags

1 code implementation · 22 Apr 2017 · Han Bao, Tomoya Sakai, Issei Sato, Masashi Sugiyama

Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available.

Tasks: Content-Based Image Retrieval · Multiple Instance Learning (+2 more)
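
Under the standard MIL assumption, a bag is labeled positive iff it contains at least one positive instance, so a natural bag-level score is the maximum instance score. A minimal sketch with a hypothetical linear instance scorer, not this paper's convex formulation:

```python
import numpy as np

def bag_score(instances, w):
    """MIL bag score under the standard assumption: the bag is positive
    iff some instance is positive, so take the max instance score."""
    return np.max(instances @ w)

# Hypothetical toy example: 2-D instances scored by a linear model.
w = np.array([1.0, -1.0])
positive_bag = np.array([[0.1, 0.2], [2.0, 0.0]])   # one positive instance
negative_bag = np.array([[-1.0, 0.5], [0.0, 0.3]])  # all instances negative
print(bag_score(positive_bag, w) > 0)   # bag classified positive
print(bag_score(negative_bag, w) > 0)   # bag classified negative
```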

Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data

no code implementations · ICML 2017 · Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama

Most of the semi-supervised classification methods developed so far use unlabeled data for regularization purposes under particular distributional assumptions such as the cluster assumption.

Tasks: Classification · General Classification
