Prediction Intervals
120 papers with code • 0 benchmarks • 2 datasets
A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
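As a minimal sketch of the idea, the following fits a simple linear regression and forms an approximate 95% prediction interval for a new observation from the residual spread (the data, model, and the use of a normal critical value are illustrative assumptions; the formula also ignores parameter-estimation uncertainty for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, 200)  # synthetic data (assumed)

# Fit a simple linear model and estimate the residual spread.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = resid.std(ddof=2)  # residual standard deviation

# Approximate 95% prediction interval for a new point x0,
# assuming roughly normal residuals: y_hat +/- 1.96 * s.
x0 = 5.0
y_hat = slope * x0 + intercept
lo, hi = y_hat - 1.96 * s, y_hat + 1.96 * s
print(lo, y_hat, hi)
```

Unlike a confidence interval for the regression mean, this interval targets a single future observation, so its width is dominated by the noise level rather than by estimation error.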
Libraries
Use these libraries to find Prediction Intervals models and implementations.

Most implemented papers
Distribution-Free Predictive Inference For Regression
In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
Conformalized Quantile Regression
Conformal prediction is a technique for constructing prediction intervals that attain valid coverage in finite samples, without making distributional assumptions.
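A hedged sketch of the split-conformal recipe that underlies this line of work (the linear base model and data are illustrative assumptions; any regressor can be substituted): fit on a training split, compute absolute-residual nonconformity scores on a held-out calibration split, and use their finite-sample-corrected quantile as the interval half-width.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_conformal_interval(x_train, y_train, x_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction interval around a simple base regressor."""
    slope, intercept = np.polyfit(x_train, y_train, 1)  # any regressor works here
    predict = lambda x: slope * x + intercept
    scores = np.abs(y_cal - predict(x_cal))             # nonconformity scores
    n = len(scores)
    # Finite-sample-valid quantile level: ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

x = rng.uniform(0, 10, 400)
y = 2.0 * x + rng.normal(0, 1.0, 400)
lo, hi = split_conformal_interval(x[:200], y[:200], x[200:], y[200:], 5.0)
print(lo, hi)
```

The coverage guarantee holds for any base predictor, since validity comes from exchangeability of the calibration scores rather than from the model being correct.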
Distributional Gradient Boosting Machines
We present a unified probabilistic gradient boosting framework for regression tasks that models and predicts the entire conditional distribution of a univariate response variable as a function of covariates.
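A related, simpler way to get interval predictions from gradient boosting (not the paper's full distributional framework) is to fit two quantile-loss models that bracket the response; a minimal sketch with scikit-learn's `GradientBoostingRegressor`, on assumed synthetic data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, (500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 500)  # synthetic data (assumed)

# Two quantile models bracket an (approximate) 90% prediction interval.
lo_model = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi_model = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = np.array([[5.0]])
lo, hi = lo_model.predict(X_new)[0], hi_model.predict(X_new)[0]
print(lo, hi)
```

Modelling the full conditional distribution, as the paper proposes, goes further: it yields any quantile, moment, or density value from one fitted model instead of one model per quantile.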
Conformal Inference for Online Prediction with Arbitrary Distribution Shifts
We consider the problem of forming prediction sets in an online setting where the distribution generating the data is allowed to vary over time.
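The core feedback idea can be sketched as follows (a simplified illustration in the spirit of adaptive conformal methods, not the paper's algorithm; the running-mean predictor, step size, and shifted data stream are all assumptions): after each observation, nudge the working miscoverage level toward the target depending on whether the last interval covered.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, gamma = 0.1, 0.01   # target miscoverage and step size (assumed values)
alpha_t = alpha            # working miscoverage level, adapted online
mu = 0.0                   # running-mean point predictor
scores, errs = [], []

for t in range(1000):
    y = rng.normal(5.0 if t < 500 else 8.0, 1.0)   # mean shift at t = 500
    if scores:
        level = float(np.clip(1.0 - alpha_t, 0.0, 1.0))
        q = np.quantile(scores, level)             # interval half-width
    else:
        q = np.inf                                 # no history yet: always cover
    err = float(abs(y - mu) > q)                   # 1 if interval missed
    errs.append(err)
    alpha_t += gamma * (alpha - err)               # miss -> widen, hit -> tighten
    scores.append(abs(y - mu))
    mu += (y - mu) / (t + 1)                       # update the predictor

coverage = 1.0 - np.mean(errs)
print(round(coverage, 3))
```

Even though the point predictor lags badly after the shift, the feedback rule inflates the intervals until empirical coverage drifts back toward the target.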
Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models
We introduce a method which enables a recurrent dynamics model to be temporally abstract.
XGBoostLSS -- An extension of XGBoost to probabilistic forecasting
We propose a new framework of XGBoost that predicts the entire conditional distribution of a univariate response variable.
Conformal prediction interval for dynamic time-series
We develop a method to construct distribution-free prediction intervals for dynamic time-series, called EnbPI, which wraps around any bootstrap ensemble estimator to construct sequential prediction intervals.
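A simplified numpy sketch of the bootstrap-ensemble idea (illustrative only: it uses in-sample residuals and a linear base learner, whereas EnbPI itself uses out-of-bag residuals and works with any ensemble estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
T, B, alpha = 300, 30, 0.1
x = np.arange(T, dtype=float)
y = 0.05 * x + rng.normal(0, 1.0, T)   # trending synthetic series (assumed)

# Fit B bootstrap linear models; average them for a center curve.
slopes = np.empty(B)
intercepts = np.empty(B)
for b in range(B):
    idx = rng.integers(0, T, T)                 # bootstrap resample
    slopes[b], intercepts[b] = np.polyfit(x[idx], y[idx], 1)

center = (slopes[:, None] * x + intercepts[:, None]).mean(axis=0)
w = np.quantile(np.abs(y - center), 1 - alpha)  # residual-quantile half-width

# Interval for the next time step t = T.
y_next = float((slopes * T + intercepts).mean())
lo, hi = y_next - w, y_next + w
print(lo, hi)
```

Because the width comes from empirical residual quantiles rather than a parametric noise model, the construction stays distribution-free.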
Conformalized Survival Analysis
Existing survival analysis techniques heavily rely on strong modelling assumptions and are, therefore, prone to model misspecification errors.
Monitoring Model Deterioration with Explainable Uncertainty Estimation via Non-parametric Bootstrap
In this work, we use non-parametric bootstrapped uncertainty estimates and SHAP values to provide explainable uncertainty estimation, a technique that monitors the deterioration of machine learning models in deployment environments and identifies the source of that deterioration when target labels are not available.
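The bootstrap half of that idea can be sketched without labels at monitoring time (a minimal illustration under assumed data and a linear base model; the SHAP attribution step is omitted): ensemble disagreement on incoming inputs serves as the uncertainty signal.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, 400)
y = 3.0 * X + rng.normal(0.0, 0.5, 400)  # synthetic training data (assumed)

# Bootstrap ensemble of simple models; the spread of their predictions is
# a label-free uncertainty signal that can be tracked on incoming data.
B = 50
params = np.array([np.polyfit(X[idx], y[idx], 1)
                   for idx in rng.integers(0, 400, (B, 400))])

def pred_std(x_new):
    """Standard deviation of the ensemble's predictions at x_new."""
    preds = params[:, 0] * x_new + params[:, 1]
    return preds.std()

# Disagreement grows on inputs far from the training distribution,
# flagging potential deterioration before any target labels arrive.
print(pred_std(0.5), pred_std(10.0))
```

Rising ensemble spread on production inputs is the monitoring trigger; attribution methods such as SHAP can then be applied, as the paper does, to localize which features drive the increase.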
Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging
Image-to-image regression is an important learning task, used frequently in biological imaging.