An empirical analysis of neural network memory structures for basin water quality forecasting
This research investigates the cumulative multi-period forecast accuracy of a diverse set of potential forecasting models for basin water quality management. The models are characterized by their short-term (memory by delay or memory by feedback) and long-term (linear or nonlinear) memory structures. The experiments are conducted as a series of forecast cycles, with a rolling origin and a constant fit size. The models are recalibrated with each cycle, and out-of-sample forecasts are generated for a five-period forecast horizon. The results confirm that the JENN and GMNN neural network models are generally more accurate than competitors for cumulative multi-period basin water quality prediction. For example, the JENN and GMNN models reduce the cumulative five-period forecast errors by as much as 50%, relative to exponential smoothing and ARIMA models. These findings are significant in view of the increasing social and economic consequences of basin water quality management, and could be extended to other scientific, medical, and business applications where multi-period predictions of nonlinear time series are critical.
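The rolling-origin evaluation scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual experiment: the synthetic series, the fit-window length, and the simple exponential-smoothing forecaster are stand-in assumptions for the paper's models and data.

```python
# Rolling-origin evaluation with a constant fit size and a five-period
# forecast horizon, recalibrating the model at each forecast cycle.
# The series and the SES forecaster below are illustrative assumptions.

def ses_forecast(history, horizon, alpha=0.3):
    """Fit simple exponential smoothing on `history` and return a flat
    multi-period forecast (SES forecasts are constant beyond one step)."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

def rolling_origin_errors(series, fit_size, horizon):
    """For each cycle, refit on the most recent `fit_size` observations
    (constant fit size), forecast `horizon` periods out of sample, and
    record the cumulative absolute error over the horizon."""
    cum_errors = []
    origin = fit_size
    while origin + horizon <= len(series):
        history = series[origin - fit_size:origin]   # constant fit window
        forecasts = ses_forecast(history, horizon)   # recalibrated each cycle
        actuals = series[origin:origin + horizon]
        cum_errors.append(sum(abs(a - f) for a, f in zip(actuals, forecasts)))
        origin += 1                                  # roll the origin forward
    return cum_errors

# Toy series: 40 observations, 20-period fit window, 5-period horizon,
# giving 16 forecast cycles.
errors = rolling_origin_errors([float(x % 7) for x in range(40)],
                               fit_size=20, horizon=5)
print(len(errors))
```

Averaging `errors` across cycles yields the cumulative multi-period accuracy measure on which the competing models would be compared.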