On the errors-in-variables problem for time series
The usual assumption in the classical errors-in-variables problem of independent measurement errors cannot necessarily be maintained when the data are time series; errors may be strongly serially correlated, possibly containing seasonal effects and trends. When it is possible to identify frequency bands over which the signal-to-noise ratio is large, an approximate solution to the errors-in-variables problem is to omit the remaining frequencies from a time series regression. We draw attention to the danger of "leakage" from the omitted frequencies, and show that the consequent bias can be reduced by means of tapering.
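The leakage phenomenon the abstract warns about can be illustrated numerically. The sketch below (a minimal illustration, not the authors' method; all parameter values are assumptions for demonstration) compares the periodogram of a sinusoid whose frequency falls between Fourier frequencies with and without a Hann taper, and measures how much power leaks into distant frequency bins.

```python
import numpy as np

def periodogram(x):
    """Periodogram of x, normalized by the sample size."""
    n = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / n

n = 256
t = np.arange(n)
# A sinusoid whose frequency lies between Fourier frequencies,
# so its power "leaks" into neighbouring bins (assumed test signal).
freq = 10.5 / n
x = np.cos(2 * np.pi * freq * t)

# Hann (cosine) taper, a standard choice for suppressing leakage.
taper = 0.5 * (1 - np.cos(2 * np.pi * t / n))

p_raw = periodogram(x)
p_tap = periodogram(taper * x)

# Fraction of total power falling far from the signal frequency
# (bins 30 and above, an arbitrary cutoff): the "leakage".
leak_raw = p_raw[30:].sum() / p_raw.sum()
leak_tap = p_tap[30:].sum() / p_tap.sum()
print(leak_raw, leak_tap)
```

The untapered periodogram's sidelobes decay slowly, so a nontrivial fraction of power appears far from the true frequency; the Hann taper's sidelobes decay much faster, sharply reducing that fraction. In the band-limited regression setting of the paper, such leakage lets noise-dominated frequencies contaminate the retained band, which is the bias tapering mitigates.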
Volume 19, Issue 2 (August 1986)