67 papers with code • 0 benchmarks • 2 datasets
A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
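As a minimal illustration of the definition above (not tied to any particular paper on this page), the classical prediction interval for a new observation in simple linear regression can be computed directly; the data and the use of the normal quantile 1.96 in place of the exact t quantile are simplifying assumptions:

```python
import math

# Toy data (hypothetical): y roughly linear in x with noise.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 14.2, 15.9]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
beta = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
alpha = ybar - beta * xbar

# Residual standard error with n - 2 degrees of freedom.
resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))

def prediction_interval(x0, z=1.96):
    """Approximate 95% prediction interval for a NEW observation at x0.

    Classical formula for simple linear regression; the normal quantile
    1.96 stands in for the exact t quantile, so coverage is slightly
    optimistic for small n.
    """
    yhat = alpha + beta * x0
    se = s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    return yhat - z * se, yhat + z * se

lo, hi = prediction_interval(5.5)
print(lo, hi)
```

Note the extra `1` under the square root: unlike a confidence interval for the mean response, a prediction interval must also absorb the noise of the single future observation, which is why it is always wider.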
These leaderboards are used to track progress in Prediction Intervals
In this paper we propose using the principle of boosting to reduce the bias of a random forest prediction in the regression setting.
The recently proposed quality-based PI learning principle extracts high-quality prediction intervals (PIs) by seeking the narrowest intervals that satisfy a specified coverage probability requirement; it better summarizes predictive uncertainty in regression tasks and has been widely applied to practical problems.
We introduce a unified framework for random forest prediction error estimation based on a novel estimator of the conditional prediction error distribution function.
We develop a method to construct distribution-free prediction intervals for dynamic time series, called EnbPI, that wraps around any bootstrap ensemble estimator to construct sequential prediction intervals.
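The general bootstrap-ensemble idea behind methods like EnbPI can be sketched in a few lines. This is a simplified illustration, not the authors' algorithm: the base learner is a trivial "predict the sample mean" regressor, and the interval width is an empirical quantile of leave-one-out (out-of-bag) residuals:

```python
import random
import statistics

random.seed(0)

# Hypothetical 1-D responses; any base learner would do, but a trivial
# "predict the bootstrap-sample mean" regressor keeps the sketch short.
y = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3, 10.0, 9.6]
n, B = len(y), 50

# Train B bootstrap replicates and record which points each one saw.
models, in_bag = [], []
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]
    models.append(statistics.mean(y[i] for i in idx))
    in_bag.append(set(idx))

# Leave-one-out residuals: each training point is scored only by the
# replicates that did NOT see it (its out-of-bag ensemble).
residuals = []
for i in range(n):
    oob = [m for m, bag in zip(models, in_bag) if i not in bag]
    if oob:
        residuals.append(abs(y[i] - statistics.mean(oob)))

def predict_interval(alpha=0.1):
    """Center = full-ensemble prediction; half-width = the (1 - alpha)
    empirical quantile of the out-of-bag absolute residuals."""
    center = statistics.mean(models)
    w = sorted(residuals)[int((1 - alpha) * len(residuals)) - 1]
    return center - w, center + w

lo, hi = predict_interval()
print(lo, hi)
```

Because the residuals come from models that never saw the point being scored, the interval width reflects genuine out-of-sample error rather than training fit; the sequential-update aspect of EnbPI (refreshing residuals as new observations arrive) is omitted here.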
The set of methods implemented in the package includes a new method to build prediction intervals with boosted forests (PIBF) and 15 method variations to produce prediction intervals with random forests, as proposed by Roy and Larocque (2020).