On the errors-in-variables problem for time series
The usual assumption in the classical errors-in-variables problem of independent measurement errors cannot necessarily be maintained when the data are time series; errors may be strongly serially correlated, possibly containing seasonal effects and trends. When it is possible to identify frequency bands over which the signal-to-noise ratio is large, an approximate solution to the errors-in-variables problem is to omit the remaining frequencies from a time series regression. We draw attention to the danger of "leakage" from the omitted frequencies, and show that the consequent bias can be reduced by means of tapering.
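The leakage mechanism described above can be illustrated numerically. The following is a minimal Python sketch, not taken from the paper: it computes the periodogram of a sinusoid whose frequency falls between Fourier bins, so that its power leaks across the spectrum, and shows that multiplying the series by a taper (here a Hann window, one common choice; the paper does not prescribe this particular taper) suppresses the leaked power in a distant frequency band. All function and variable names are illustrative.

```python
import numpy as np

def periodogram(x, taper=None):
    """Periodogram of x, optionally after multiplying by a taper window.

    Normalized by the taper's energy so that power levels remain
    comparable across different windows.
    """
    n = len(x)
    w = np.ones(n) if taper is None else taper
    X = np.fft.rfft(x * w)
    return (np.abs(X) ** 2) / np.sum(w ** 2)

n = 1024
t = np.arange(n)
# A component at a frequency midway between Fourier bins: with no taper
# (a rectangular window) its energy leaks into every other frequency.
f0 = 100.5 / n
x = np.sin(2 * np.pi * f0 * t)

p_rect = periodogram(x)                 # no taper (rectangular window)
p_hann = periodogram(x, np.hanning(n))  # Hann taper

# Power leaked into a band far from f0 (Fourier bins 300..399).
leak_rect = p_rect[300:400].sum()
leak_hann = p_hann[300:400].sum()
```

Here `leak_hann` is orders of magnitude smaller than `leak_rect`: the Hann taper's sidelobes decay much faster than the rectangular window's, which is the sense in which tapering reduces the bias that leakage from omitted frequencies would otherwise induce in the band-limited regression.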
Volume (Year): 19 (1986)
Issue (Month): 2 (August)
Handle: RePEc:eee:jmvana:v:19:y:1986:i:2:p:240-250