A multiple testing procedure for neural network model selection
One of the most critical issues when using neural networks is how to select an appropriate network architecture for the problem at hand. Practitioners usually refer to information criteria, which might lead to over-parameterized models with severe consequences in terms of overfitting and poor ex-post forecast accuracy. Moreover, since model selection criteria depend on sample information, their actual values are subject to statistical variation. So, to compare multiple models in terms of their out-of-sample predictive ability, a test procedure is needed. But in such a context there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the model yielding the result. The problem can be particularly serious when using neural network models, which are basically atheoretical. In this paper we propose a strategy for neural network model selection which is based on a sequence of tests; to avoid the data snooping problem, the familywise error rate is controlled by a proper technique. The procedure requires the implementation of resampling techniques in order to obtain valid asymptotic critical values for the tests. Some simulation results and applications to real data are discussed.
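The abstract does not spell out the construction of the tests, so the following is only a minimal sketch of one way such a procedure could look, assuming a Reality-Check-style comparison of candidate architectures against a benchmark model: out-of-sample loss differentials are computed for each candidate, dependence is handled with a moving-block bootstrap, and the familywise error rate is controlled through the (1 − α) quantile of the bootstrapped max statistic. The function names, the block length, and the single-step max rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Illustrative sketch (not the paper's exact procedure): compare k candidate
# network architectures against a benchmark using out-of-sample losses, with
# a moving-block bootstrap for the critical value and familywise error rate
# control via a single-step max-statistic rule.

def block_bootstrap_indices(n, block_len, rng):
    """Draw one moving-block bootstrap resample of the indices 0..n-1."""
    idx = []
    while len(idx) < n:
        start = rng.integers(0, n - block_len + 1)
        idx.extend(range(start, start + block_len))
    return np.asarray(idx[:n])

def fwer_model_selection(loss_bench, loss_models, alpha=0.05,
                         n_boot=999, block_len=10, seed=0):
    """loss_bench: (T,) out-of-sample losses of the benchmark model.
    loss_models: (T, k) out-of-sample losses of k candidate models.
    Returns a boolean mask marking candidates that beat the benchmark
    significantly, with the familywise error rate held at alpha."""
    rng = np.random.default_rng(seed)
    T, k = loss_models.shape
    d = loss_bench[:, None] - loss_models      # > 0 means candidate is better
    stat = np.sqrt(T) * d.mean(axis=0)         # one statistic per candidate

    # Bootstrap the null distribution of the max statistic: resample the
    # recentred loss differentials so every candidate satisfies the null.
    d_centred = d - d.mean(axis=0)
    max_stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = block_bootstrap_indices(T, block_len, rng)
        max_stats[b] = (np.sqrt(T) * d_centred[idx].mean(axis=0)).max()

    crit = np.quantile(max_stats, 1.0 - alpha)  # common critical value
    return stat > crit

# Toy usage with synthetic losses (purely for illustration)
rng = np.random.default_rng(1)
T, k = 500, 5
loss_bench = rng.normal(1.0, 0.2, T)
loss_models = rng.normal(1.0, 0.2, (T, k))
loss_models[:, 0] -= 0.05                      # one genuinely better model
print(fwer_model_selection(loss_bench, loss_models))
```

A step-down refinement of this rule, re-applying the max-statistic criterion to the models not yet rejected, would typically gain power while still controlling the familywise error rate.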