Prediction Intervals
94 papers with code • 0 benchmarks • 2 datasets
A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
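As a minimal sketch of the definition above: under a simple linear regression with Gaussian errors, a prediction interval for a *new* observation widens the usual confidence interval by the irreducible noise term. The function below is an illustrative large-sample approximation (normal z quantile instead of the exact t quantile), not any particular paper's method.

```python
import numpy as np

def linear_pi(x, y, x_new, z=1.96):
    """Approximate 95% prediction interval for a new observation at x_new
    under simple linear regression with Gaussian errors (large-sample
    approximation: uses the normal z quantile, not the exact t quantile)."""
    n = len(x)
    # Fit y = b0 + b1 * x by least squares.
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    s = np.sqrt(resid @ resid / (n - 2))  # residual standard error
    # The leading 1 under the square root is the new-observation noise term
    # that distinguishes a prediction interval from a confidence interval.
    se = s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2 / (n * np.var(x)))
    pred = b0 + b1 * x_new
    return pred - z * se, pred + z * se
```

For data generated as y = 2x + 1 + noise, the interval at x = 5 should bracket the true conditional mean of 11 with width roughly 2 × 1.96 × σ.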
Benchmarks
These leaderboards are used to track progress in Prediction Intervals
Libraries
Use these libraries to find Prediction Intervals models and implementations
Most implemented papers
High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks.
Boosting Random Forests to Reduce Bias; One-Step Boosted Forest and its Variance Estimate
In this paper we propose using the principle of boosting to reduce the bias of a random forest prediction in the regression setting.
Calibrated Prediction Intervals for Neural Network Regressors
Despite their well-known accuracy advantages, contemporary neural networks are generally poorly calibrated and as such do not produce reliable output probability estimates.
Single-Model Uncertainties for Deep Learning
To estimate epistemic uncertainty, we propose Orthonormal Certificates (OCs), a collection of diverse non-constant functions that map all training samples to zero.
HDI-Forest: Highest Density Interval Regression Forest
By seeking the narrowest prediction intervals (PIs) that satisfy the specified coverage probability requirements, the recently proposed quality-based PI learning principle can extract high-quality PIs that better summarize the predictive certainty in regression tasks, and has been widely applied to solve many practical problems.
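The quality-based principle described above trades off two standard quantities: PI coverage probability (PICP) and mean PI width (MPIW). A minimal sketch of these metrics (names follow common usage in the PI literature; this is not code from the paper):

```python
import numpy as np

def picp_mpiw(y, lo, hi):
    """PI coverage probability (fraction of targets inside their interval)
    and mean PI width -- the two quantities quality-based PI learning
    trades off: maximize coverage while keeping intervals narrow."""
    covered = (y >= lo) & (y <= hi)
    return covered.mean(), (hi - lo).mean()
```

A high-quality PI method drives PICP to the nominal level (e.g. 0.95) while minimizing MPIW.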
Simultaneous Prediction Intervals for Patient-Specific Survival Curves
In this paper, we demonstrate that an existing method for estimating simultaneous prediction intervals from samples can easily be adapted for patient-specific survival curve analysis and yields accurate results.
XGBoostLSS -- An extension of XGBoost to probabilistic forecasting
We propose a new framework of XGBoost that predicts the entire conditional distribution of a univariate response variable.
With Malice Towards None: Assessing Uncertainty via Equalized Coverage
An important factor to guarantee a fair use of data-driven recommendation systems is that we should be able to communicate their uncertainty to decision makers.
Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors
With rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties.
A comparison of some conformal quantile regression methods
We compare two recently proposed methods that combine ideas from conformal inference and quantile regression to produce locally adaptive and marginally valid prediction intervals under sample exchangeability (Romano et al., 2019; Kivaranovic et al., 2019).
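The marginal validity these methods guarantee can be illustrated with the simplest split-conformal construction: calibrate a symmetric interval width on held-out residuals. This sketch uses a toy least-squares point predictor; conformal *quantile* regression (Romano et al., 2019) instead conformalizes a pair of quantile estimates to obtain locally adaptive widths, which this basic variant does not.

```python
import numpy as np

def split_conformal_pi(x_train, y_train, x_cal, y_cal, x_test, alpha=0.1):
    """Basic split-conformal prediction intervals around a toy linear
    point predictor. Under exchangeability, marginal coverage is at
    least 1 - alpha."""
    # Toy point predictor: least-squares line fit on the training split.
    b1, b0 = np.polyfit(x_train, y_train, 1)
    predict = lambda x: b0 + b1 * x
    # Conformity scores: absolute residuals on the calibration split.
    scores = np.abs(y_cal - predict(x_cal))
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = predict(x_test)
    return pred - q, pred + q
```

With alpha = 0.1, empirical test coverage should land near 90% regardless of the point predictor's quality; only the interval width suffers when the model is poor.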