Combining Different Procedures for Adaptive Regression
Given any countable collection of regression procedures (e.g., kernel, spline, wavelet, local polynomial, or neural network methods), we show that a single adaptive procedure can be constructed to share their advantages to a great extent in terms of global squared L2 risk. The combined procedure pays a price of only order 1/n for adaptation over the collection. An interesting consequence is that, for a countable collection of classes of regression functions (possibly of completely different characteristics), a minimax-rate adaptive estimator can be constructed that automatically converges at the right rate for each class under consideration. A demonstration is given for high-dimensional regression, where, to overcome the well-known curse of dimensionality in accuracy, it is advantageous to seek different ways of characterizing a high-dimensional function (e.g., using neural networks or additive modeling) so as to reduce the influence of the input dimension relative to traditional approximation theory (e.g., series expansions). In general, however, it is difficult to assess which characterization works well for the unknown regression function, so adaptation over the different modelings is desired. For example, by combining various regression procedures, we show that a single estimator can be constructed that is minimax-rate adaptive over Besov classes of unknown smoothness and interaction order, converges at rate o(n^(-1/2)) when the regression function has a neural network representation, and at the same time is consistent over all bounded regression functions.
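The combination idea described above rests on evaluating each candidate procedure's predictive performance and mixing the candidates accordingly. As a minimal, illustrative sketch (not the paper's exact algorithm), one can split the sample, fit every candidate on the first half, and weight each fit by the exponential of its negative cumulative squared error on the second half; the function and parameter names (`combine_procedures`, `beta`) are invented for this example:

```python
import numpy as np

def combine_procedures(x, y, procedures, beta=1.0):
    """Combine candidate regression procedures by data splitting and
    exponential weighting of their held-out squared errors (illustrative)."""
    n = len(y)
    half = n // 2
    # Fit each candidate procedure on the first half of the sample.
    fits = [proc(x[:half], y[:half]) for proc in procedures]
    # Cumulative squared prediction error on the held-out second half.
    losses = np.array([np.sum((f(x[half:]) - y[half:]) ** 2) for f in fits])
    # Exponential weights: procedures that predict better dominate the mixture.
    w = np.exp(-beta * (losses - losses.min()))
    w /= w.sum()
    return lambda x_new: sum(wi * f(x_new) for wi, f in zip(w, fits))

def constant_fit(x, y):
    """Trivial candidate: predict the sample mean everywhere."""
    m = float(y.mean())
    return lambda x_new: np.full(np.shape(x_new), m, dtype=float)

def linear_fit(x, y):
    """Candidate: ordinary least-squares line."""
    coef = np.polyfit(x, y, 1)
    return lambda x_new: np.polyval(coef, np.asarray(x_new, dtype=float))
```

On data with a genuinely linear trend, the weight of `linear_fit` approaches one and the combined predictor essentially coincides with the better candidate, which is the behavior the risk bound formalizes.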
Journal of Multivariate Analysis, Volume 74 (2000), Issue 1 (July), pages 135-161.
Handle: RePEc:eee:jmvana:v:74:y:2000:i:1:p:135-161