Bayesian optimization (BO) is an approach to globally optimizing black-box objective functions that are expensive to evaluate.
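A minimal sketch of the standard BO loop (a generic illustration, not any specific paper's method; the toy objective, the discretized search grid, and all hyperparameters are assumptions): fit a Gaussian-process surrogate to the evaluations collected so far, then query the point that maximizes an acquisition function, here expected improvement.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Stand-in for the expensive black-box objective.
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # candidate query points
X = rng.uniform(0.0, 2.0, size=(3, 1))            # small initial design
y = f(X).ravel()

for _ in range(10):
    # Surrogate model of the objective from the data gathered so far.
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    sd = np.maximum(sd, 1e-12)                    # guard against zero std
    # Expected improvement over the incumbent (maximization).
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmax(y)].item(), "best f:", y.max())
```

The discretized grid keeps the sketch short; in practice the acquisition function is usually maximized with a continuous optimizer.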
In a recent contribution, we showed that the SkewGP prior and the probit likelihood are conjugate, which allows us to compute the exact posterior for non-parametric binary classification and preference learning.
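To sketch why the conjugacy holds, in a simplified finite-dimensional view (standard probit algebra, not the paper's full derivation): a Gaussian prior on the latent values at the n training inputs, combined with a probit likelihood, yields a posterior proportional to a Gaussian pdf times a Gaussian cdf,

```latex
% Gaussian prior at the training inputs, probit likelihood, D = diag(y_1, ..., y_n)
p(\mathbf{f}) = \phi_n(\mathbf{f};\, \mathbf{0}, \Sigma),
\qquad
p(\mathbf{y} \mid \mathbf{f}) = \prod_{i=1}^{n} \Phi(y_i f_i)
                              = \Phi_n(D\mathbf{f};\, \mathbf{0}, I_n),
% posterior: Gaussian pdf times Gaussian cdf
p(\mathbf{f} \mid \mathbf{y}) \;\propto\; \phi_n(\mathbf{f};\, \mathbf{0}, \Sigma)\,
                                          \Phi_n(D\mathbf{f};\, \mathbf{0}, I_n).
```

This pdf-times-cdf form is exactly a unified skew-normal (SUN) density, the finite-dimensional marginal of a SkewGP; starting from a SkewGP prior instead only adds further Gaussian-cdf factors of the same kind, which is why the family is closed under probit updating.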
Automatic forecasting is the task of receiving a time series and returning forecasts for the next time steps without any human intervention.
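As one possible instantiation (an illustrative assumption, not a description of any particular system), an automatic forecaster reduces to a single function that takes the series and a horizon and returns predictions with no user tuning; here the hypothetical helper `automatic_forecast` wraps exponential smoothing from statsmodels:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def automatic_forecast(y, horizon, seasonal_periods=12):
    """Fit a seasonal exponential-smoothing model and forecast `horizon` steps."""
    model = ExponentialSmoothing(
        y, trend="add", seasonal="add", seasonal_periods=seasonal_periods
    )
    return model.fit().forecast(horizon)

# Toy monthly series: linear trend, yearly seasonality, noise.
t = np.arange(120)
y = 0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(0).normal(size=120)
print(automatic_forecast(y, horizon=12))
```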
In this paper, we prove that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP) with highly skewed pairwise marginals, and thus show that Laplace's method usually provides a very poor approximation.
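A one-dimensional caricature makes the point (an illustrative assumption, not the paper's experiment): a standard-normal prior times a single probit factor gives a posterior proportional to phi(f) * Phi(alpha * f), a skew-normal shape, while Laplace's method replaces it with a symmetric Gaussian at the mode, so all skewness is discarded:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

alpha = 5.0                              # skew induced by the probit factor
f = np.linspace(-6.0, 6.0, 20001)
p = norm.pdf(f) * norm.cdf(alpha * f)    # unnormalized 1-D posterior
p /= trapezoid(p, f)                     # normalize numerically

mean = trapezoid(f * p, f)
var = trapezoid((f - mean) ** 2 * p, f)
skew = trapezoid(((f - mean) / np.sqrt(var)) ** 3 * p, f)

# Laplace fits a symmetric Gaussian at the mode: its skewness is 0 by construction.
mode = f[np.argmax(p)]
print(f"true mean {mean:.3f}, mode {mode:.3f}, skewness {skew:.3f} (Laplace: 0.000)")
```

The true mean and the mode disagree and the skewness is far from zero, which is exactly the regime in which a mode-centred Gaussian approximation is poorest.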
Gaussian Processes (GPs) are powerful kernelized methods for non-parametric regression used in many applications.
In spite of their success, GPs have limited use in some applications; for example, in some cases a distribution that is symmetric about its mean is an unreasonable model.
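To make the GP regression machinery concrete, here is the textbook closed-form posterior, computed with a Cholesky factorization for numerical stability (the squared-exponential kernel, its length-scale, and the toy data are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(20, 1))                  # training inputs
y = np.sin(6 * X).ravel() + 0.1 * rng.normal(size=20)
Xs = np.linspace(0, 1, 5).reshape(-1, 1)             # test inputs
noise = 0.1 ** 2

K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + noise*I)^{-1} y
mu = Ks.T @ alpha                                    # posterior mean
v = np.linalg.solve(L, Ks)
var = np.maximum(np.diag(rbf(Xs, Xs)) - (v ** 2).sum(axis=0), 0.0)
print(np.c_[mu, np.sqrt(var)])                       # mean and std at test points
```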
Usually, one compares the accuracy of two competing classifiers via null hypothesis significance tests (NHST).
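One common instance of such a test for two classifiers evaluated over multiple datasets (the per-dataset accuracies below are made up) is the Wilcoxon signed-rank test on the paired accuracy differences:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-dataset accuracies of classifiers A and B on 10 datasets.
acc_a = np.array([0.81, 0.74, 0.90, 0.66, 0.78, 0.85, 0.70, 0.88, 0.79, 0.83])
acc_b = np.array([0.78, 0.75, 0.86, 0.60, 0.77, 0.80, 0.69, 0.84, 0.80, 0.81])

# H0: the paired differences are symmetric about zero.
stat, pvalue = wilcoxon(acc_a, acc_b)
print(f"W = {stat}, p = {pvalue:.3f}")  # reject H0 at the chosen alpha if p is small
```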
The machine learning community adopted null hypothesis significance testing (NHST) to ensure the statistical validity of results.
In other words, the outcome of the comparison between algorithms A and B also depends on the performance of the other algorithms included in the original experiment.
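A small numeric illustration of this dependence (toy scores, hypothetical setup): in rank-based procedures such as the Friedman test with its post-hoc comparisons, A and B are compared through mean ranks over datasets, and adding a third algorithm C changes those ranks even though the scores of A and B are untouched:

```python
import numpy as np
from scipy.stats import rankdata

# Toy accuracies on 4 datasets (rows) for algorithms A and B (columns).
ab = np.array([[0.80, 0.78],
               [0.70, 0.73],
               [0.90, 0.85],
               [0.60, 0.62]])
c = np.array([0.79, 0.60, 0.86, 0.59])  # a third algorithm, C

def mean_ranks(scores):
    # Rank algorithms within each dataset (1 = best), then average over datasets.
    ranks = np.vstack([rankdata(-row) for row in scores])
    return ranks.mean(axis=0)

print("A vs B alone: ", mean_ranks(ab))                            # A and B tie
print("A vs B with C:", mean_ranks(np.column_stack([ab, c]))[:2])  # A pulls ahead
```

Here C slots between A and B only on the datasets where A wins, so a mean-rank gap between A and B appears purely because C joined the experiment.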
In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence.
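For readers unfamiliar with the two notions, the standard credal-set formulations (paraphrased from the imprecise-probability literature, not quoted from the paper) can be sketched as follows, with K(.) a credal set of probability mass functions and CH the convex hull:

```latex
% Epistemic irrelevance of X to Y: observing X does not change the beliefs about Y
K(Y \mid X = x) = K(Y) \quad \text{for every value } x \text{ of } X.

% Strong independence: the joint credal set is generated by stochastically
% independent precise models
K(X, Y) = \mathrm{CH}\bigl\{ P_X \otimes P_Y : P_X \in K(X),\; P_Y \in K(Y) \bigr\}.
```

Strong independence implies epistemic irrelevance but not conversely, which is why the two concepts can lead to different inferential complexity.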