Extremely Randomized Neural Networks for constructing prediction intervals.
T. Mancini, H.-F. Calvo-Pardo and J. Olmo (2021). Extremely Randomized Neural Networks for constructing prediction intervals. Neural Networks 144, 113-128.
The aim of this paper is to propose a novel prediction model based on an ensemble of deep
neural networks. To do this, we adapt the extremely randomized trees method originally
developed for random forests. The extra randomness introduced in the ensemble reduces
the variance of the predictions and yields gains in out-of-sample accuracy. As a byproduct,
we are able to quantify the uncertainty of our model predictions and construct interval
forecasts. The method also overcomes some of the limitations of bootstrap-based
algorithms: because it performs no data resampling, it remains suitable
in low- and mid-dimensional settings, or when the i.i.d. assumption does
not hold. An extensive Monte Carlo simulation exercise shows the good performance of
this novel prediction method in terms of mean square prediction error and the accuracy of
the prediction intervals in terms of out-of-sample prediction interval coverage probabilities.
The proposed approach delivers better out-of-sample accuracy in experimental settings,
improving upon state-of-the-art methods such as MC dropout and bootstrap procedures.
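The core idea above, an ensemble of extra-randomized networks whose dispersion yields a prediction interval, can be sketched in a minimal, hedged form. This is only an illustration, not the authors' algorithm: each ensemble member here is a one-hidden-layer network with randomly drawn (untrained) hidden weights, only the output layer is fit by least squares, and the interval is taken from quantiles of the ensemble's predictions. The data, network sizes, and quantile levels are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise (illustrative, not from the paper)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=200)

def fit_random_net(X, y, rng, hidden=50):
    """One ensemble member: random, fixed hidden layer (the 'extra
    randomness'); only the output weights are fit, by least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Xn: np.tanh(Xn @ W + b) @ beta

# Ensemble of 100 independently randomized networks
ensemble = [fit_random_net(X, y, rng) for _ in range(100)]

# Point forecast = ensemble mean; interval = ensemble quantiles
X_test = np.array([[0.5]])
preds = np.array([f(X_test)[0] for f in ensemble])
point = preds.mean()
lo, hi = np.quantile(preds, [0.025, 0.975])     # nominal 95% interval
```

Because no resampling of the data is performed, each member sees the full sample, consistent with the abstract's point that the method avoids bootstrap resampling; the interval reflects variation across the randomized members only.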