35 papers with code • 0 benchmarks • 2 datasets
A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
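A minimal sketch of the idea, using a hypothetical toy dataset: fit ordinary least squares, estimate the residual noise scale, and form a 95% normal-theory prediction interval for a new observation (all variable names and the data-generating process here are illustrative assumptions, not from any of the listed papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: y = 2x + 1 + Gaussian noise
x = rng.uniform(0, 10, 200)
y = 2 * x + 1 + rng.normal(0, 1.5, size=x.size)

# Ordinary least-squares fit with an intercept column
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual standard deviation estimates the noise scale
resid = y - X @ beta
sigma = resid.std(ddof=2)

# 95% normal-theory prediction interval for a new observation at x_new
x_new = 5.0
point = beta[0] + beta[1] * x_new
z = 1.96  # approx. 97.5th percentile of the standard normal
lower, upper = point - z * sigma, point + z * sigma
print(lower, point, upper)
```

This normal-theory construction assumes homoskedastic Gaussian noise; the distributional boosting and neural-network methods listed below relax exactly these assumptions.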
We use these data to develop predictions and corresponding prediction intervals for the short-term trajectory of COVID-19 cumulative death counts at the county level in the United States up to two weeks ahead.
We propose a new framework that extends XGBoost to predict the entire conditional distribution of a univariate response variable.
Accurate quantification of model uncertainty has long been recognized as a fundamental requirement for trusted AI.
With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify their inherent uncertainties.
We propose a new framework that extends CatBoost to predict the entire conditional distribution of a univariate response variable.
In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
An important factor in guaranteeing fair use of data-driven recommendation systems is the ability to communicate their uncertainty to decision makers.
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks.