The task is regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains.
Predicting which action (treatment) will lead to a better outcome is a central task in decision support systems.
We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
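Flooding keeps the training loss from shrinking to zero by reflecting it around a constant "flood level" b. A minimal sketch of this transformation (the function name and scalar interface here are illustrative, not from the paper's implementation):

```python
def flooded_loss(loss, flood_level):
    # Flooding: flooded = |loss - b| + b.
    # Above the flood level b the loss (and its gradient) is unchanged;
    # below b the sign of the gradient flips, so optimization ascends,
    # keeping the training loss floating around b instead of reaching zero.
    return abs(loss - flood_level) + flood_level
```

In practice the same transformation is applied to the mini-batch loss tensor before backpropagation, with b chosen by validation.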
To apply a gradient method to this maximization, the fundamental challenge is to accurately approximate the gradient of the MRR, rather than the MRR itself.
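The MRR is piecewise constant in the scores, so its gradient is zero almost everywhere; a common workaround is to replace the rank's indicator functions with sigmoids, giving a differentiable surrogate. A minimal sketch under that assumption (function name, temperature parameter, and interface are illustrative, not the method of the paper):

```python
import math

def smoothed_reciprocal_rank(scores, relevant_idx, temperature=1.0):
    # The true rank of the relevant item is
    #   rank = 1 + sum_j 1[s_j > s_rel].
    # Replacing the indicator with a sigmoid yields a smooth "soft rank"
    # whose reciprocal is differentiable in the scores, so its gradient
    # can stand in for the (almost-everywhere-zero) gradient of MRR.
    s_rel = scores[relevant_idx]
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    soft_rank = 1.0 + sum(sigmoid((s - s_rel) / temperature)
                          for i, s in enumerate(scores) if i != relevant_idx)
    return 1.0 / soft_rank
```

Averaging this quantity over queries gives a smooth surrogate of the MRR that a gradient method can maximize.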
In this paper, motivated by a semi-supervised classification method recently proposed by Sakai et al. (2017), we develop a method for the BMC problem that can use all of the positive, negative, and unobserved entries, by combining the risks of Davenport et al. (2014) and Hsieh et al. (2015).
Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data.
Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification.
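The AUC admits a simple pairwise interpretation: it is the probability that a randomly drawn positive example is scored above a randomly drawn negative one (with ties counted as one half), which is why AUC maximization is naturally formulated over positive-negative pairs. A minimal sketch of this empirical estimate (the function name and list-based interface are illustrative):

```python
def empirical_auc(pos_scores, neg_scores):
    # Empirical AUC: the fraction of (positive, negative) pairs in which
    # the positive example receives the higher score; ties count 1/2.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores
               for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

AUC maximization methods typically replace the non-differentiable pairwise indicator above with a convex surrogate (e.g. the pairwise hinge or squared loss) and optimize that instead.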
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available.
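Under the standard MIL assumption, a bag is positive if and only if at least one of its instances is positive, so a common baseline scores a bag by the maximum over its instance scores. A minimal sketch under that assumption (function names and the threshold are illustrative):

```python
def bag_score(instance_scores):
    # Standard MIL assumption: a bag is positive iff it contains at
    # least one positive instance, so the bag-level score is the
    # maximum over the instance-level scores.
    return max(instance_scores)

def predict_bag(instance_scores, threshold=0.5):
    # Label the bag positive (1) when its max instance score
    # clears the decision threshold, negative (0) otherwise.
    return 1 if bag_score(instance_scores) >= threshold else 0
```

Only the bag label is observed during training, so learning proceeds by backpropagating the bag-level loss through the max (or a smooth pooling operator such as log-sum-exp).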
Most semi-supervised classification methods developed so far use unlabeled data for regularization under particular distributional assumptions, such as the cluster assumption.
In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data without negative (N) data.
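A standard route to PU learning is the unbiased risk estimator of du Plessis et al. (2014): since U is a mixture of P and N with class prior pi = p(y=+1), the risk on the unseen negative class can be recovered from the P and U losses. A minimal sketch of this estimator (function name and the pre-computed-loss interface are illustrative simplifications):

```python
def pu_risk(pos_losses_as_pos, pos_losses_as_neg, unl_losses_as_neg, prior):
    # Unbiased PU risk estimator:
    #   R(g) = pi * R_P^+(g) - pi * R_P^-(g) + R_U^-(g)
    # where R_P^+ / R_P^- are the mean losses of treating P data as
    # positive / negative, and R_U^- is the mean loss of treating U
    # data as negative. Subtracting pi * R_P^- removes the positive
    # component hidden inside U, leaving the true negative-class risk.
    mean = lambda xs: sum(xs) / len(xs)
    return (prior * mean(pos_losses_as_pos)
            - prior * mean(pos_losses_as_neg)
            + mean(unl_losses_as_neg))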