Evaluating Neural Network Predictors by Bootstrapping
We present a new method, inspired by the bootstrap, whose goal is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting, along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between training, cross-validation, and test sets) is significantly larger than the variation due to different network conditions (such as architecture and initial weights). Furthermore, this method allows us to forecast a probability distribution, as opposed to the traditional case of just a single value at each time step. We demonstrate this on a strictly held-out test set that includes the 1987 stock market crash. We also compare the performance of the class of neural networks to identically bootstrapped linear models.
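The two sources of variation described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it uses synthetic data, a toy one-hidden-layer network, and random splits purely to show the idea: train the same predictor across several resamplings (fixing the initial weights) and across several weight initializations (fixing the split), then compare the spread of test errors, and treat the ensemble of predictions at each test point as a forecast distribution rather than a single value. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def make_series(n=400, seed=0):
    # Synthetic autoregressive series as a stand-in for the financial data.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + 0.1 * np.sin(x[t - 1]) + 0.1 * rng.standard_normal()
    X = x[:-1, None]  # lagged value as the single input feature
    y = x[1:]         # next value as the target
    return X, y

def train_mlp(Xtr, ytr, hidden=8, steps=300, lr=0.05, seed=0):
    # Tiny one-hidden-layer tanh network, full-batch gradient descent on MSE.
    rng = np.random.default_rng(seed)
    W1 = 0.5 * rng.standard_normal((Xtr.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = 0.5 * rng.standard_normal((hidden, 1))
    b2 = 0.0
    n = len(ytr)
    for _ in range(steps):
        H = np.tanh(Xtr @ W1 + b1)
        err = (H @ W2)[:, 0] + b2 - ytr
        gpred = (2.0 / n) * err[:, None]       # dMSE/dprediction
        gW2 = H.T @ gpred
        gb2 = gpred.sum()
        gH = gpred @ W2.T * (1.0 - H ** 2)     # backprop through tanh
        gW1 = Xtr.T @ gH
        gb1 = gH.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda X: (np.tanh(X @ W1 + b1) @ W2)[:, 0] + b2

def split(X, y, seed):
    # Random 70/30 split; a simplification of the paper's
    # training / cross-validation / test resamplings.
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(0.7 * len(y))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

X, y = make_series()

# Variation due to resampling: many splits, identical initial weights.
mse_splits = []
for s in range(10):
    Xtr, ytr, Xte, yte = split(X, y, seed=s)
    f = train_mlp(Xtr, ytr, seed=0)
    mse_splits.append(np.mean((f(Xte) - yte) ** 2))

# Variation due to network conditions: one split, many initializations.
Xtr, ytr, Xte, yte = split(X, y, seed=0)
mse_inits, preds = [], []
for s in range(10):
    f = train_mlp(Xtr, ytr, seed=s)
    mse_inits.append(np.mean((f(Xte) - yte) ** 2))
    preds.append(f(Xte))

# The ensemble gives a forecast *distribution* at each test point,
# rather than a single predicted value.
pred_dist = np.stack(preds)  # shape: (n_models, n_test_points)
print("std over resamplings:", np.std(mse_splits))
print("std over initializations:", np.std(mse_inits))
```

On real data the paper's finding is that the first spread (over resamplings) dominates the second (over initializations); whether the toy data above reproduces that is incidental to the mechanics being shown.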