On the errors-in-variables problem for time series
The usual assumption in the classical errors-in-variables problem of independent measurement errors cannot necessarily be maintained when the data are time series; errors may be strongly serially correlated, possibly containing seasonal effects and trends. When it is possible to identify frequency bands over which the signal-to-noise ratio is large, an approximate solution to the errors-in-variables problem is to omit the remaining frequencies from a time series regression. We draw attention to the danger of "leakage" from the omitted frequencies, and show that the consequent bias can be reduced by means of tapering.
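The band-limited regression idea in the abstract can be illustrated numerically. The sketch below is not from the paper; the signal, the low-frequency contamination, the chosen band, and the Hanning taper are all illustrative assumptions. A regressor is observed with strong trend-like measurement error concentrated at low frequencies; regressing in the frequency domain over only the high-frequency band should recover the slope, but spectral leakage from the omitted low frequencies inflates the denominator and attenuates the estimate, and tapering reduces that leakage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
beta = 2.0
t = np.arange(n)

# True regressor: white noise, so its power is spread evenly over frequency.
xi = rng.standard_normal(n)

# Measurement error in the regressor: a strong trend, i.e. noise whose power
# is concentrated at low frequencies (an errors-in-variables contamination).
u = 40.0 * (t / n)
x = xi + u                       # observed, error-ridden regressor
y = beta * xi + 0.2 * rng.standard_normal(n)

def band_regression(x, y, lo_frac, taper):
    """Frequency-domain least squares restricted to frequencies above
    lo_frac * Nyquist; optionally taper before transforming."""
    n = len(x)
    w = np.hanning(n) if taper else np.ones(n)
    X = np.fft.rfft(w * x)
    Y = np.fft.rfft(w * y)
    freqs = np.fft.rfftfreq(n)       # cycles per sample, 0 .. 0.5
    band = freqs >= lo_frac * 0.5    # keep only the high-SNR band
    # Band-limited estimator: sum Re(conj(X) Y) / sum |X|^2 over the band.
    return np.sum((np.conj(X[band]) * Y[band]).real) / np.sum(np.abs(X[band]) ** 2)

beta_untapered = band_regression(x, y, lo_frac=0.1, taper=False)
beta_tapered = band_regression(x, y, lo_frac=0.1, taper=True)
```

With a rectangular window the trend's spectrum leaks into the retained band (sidelobes decaying only like 1/k), biasing the estimate toward zero; the Hanning taper's faster sidelobe decay largely removes this contamination, so the tapered estimate lands much closer to the true slope.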
Volume 19 (1986), Issue 2 (August)