To difference or not to difference: a Monte Carlo investigation of inference in vector autoregression models
It is often unclear whether time series displaying substantial persistence should be modelled as a vector autoregression in levels (perhaps with a trend term) or in differences. The impact of this decision on inference is examined here using Monte Carlo simulation. In particular, the size and power of variable inclusion (Granger causality) tests and the coverage of impulse response function confidence intervals are examined for simulated vector autoregression models using a variety of estimation techniques. We conclude that testing should be done using differenced regressors, but that overdifferencing a model yields poor impulse response function confidence interval coverage; modelling in Hodrick-Prescott filtered levels yields poor results in any case. We find that the lag-augmented vector autoregression method suggested by Toda and Yamamoto (1995) – which models the level of the series but allows for variable inclusion testing on changes in the series – performs well for both Granger causality testing and impulse response function estimation.
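The lag-augmented approach described above can be sketched in code. The idea, per Toda and Yamamoto (1995), is to fit the levels VAR with p + d lags (d being the maximal order of integration) and then Wald-test only the first p lag coefficients of the candidate causal variable. The sketch below is a minimal single-equation illustration, not the paper's actual simulation design: the data-generating process (two I(1) series where changes in x drive changes in y), the lag orders, and the function name `lag_augmented_wald` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_augmented_wald(y, x, p=1, d=1):
    """Toda-Yamamoto lag-augmented Wald test of 'x Granger-causes y'.

    Regress y_t (in levels) on a constant and p+d lags of y and x,
    then Wald-test the first p lag coefficients of x. Under H0 the
    statistic is asymptotically chi-squared with p degrees of freedom,
    even when the series are I(d). Illustrative sketch only.
    """
    T = len(y)
    k = p + d                      # total lags included (the extra d are "augmentation")
    rows = T - k
    X = np.ones((rows, 1 + 2 * k)) # col 0: constant; 1..k: y lags; k+1..2k: x lags
    for j in range(1, k + 1):
        X[:, j] = y[k - j:T - j]       # y_{t-j}
        X[:, k + j] = x[k - j:T - j]   # x_{t-j}
    Y = y[k:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (rows - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    idx = np.arange(k + 1, k + 1 + p)  # only the first p lags of x are tested
    b = beta[idx]
    return float(b @ np.linalg.inv(cov[np.ix_(idx, idx)]) @ b)

# Assumed DGP: both series are I(1); lagged changes in x feed changes in y.
n = 500
dx = rng.standard_normal(n)
dy = 0.5 * np.concatenate(([0.0], dx[:-1])) + rng.standard_normal(n)
x, y = np.cumsum(dx), np.cumsum(dy)

W_causal = lag_augmented_wald(y, x, p=1, d=1)  # x -> y: should be large
W_null = lag_augmented_wald(x, y, p=1, d=1)    # y -> x: should be small
```

Note that the test statistic uses only the first p lags; the augmentation lags absorb the unit-root nonstandard asymptotics, which is what lets the levels specification support a standard chi-squared Granger causality test.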
Volume (Year): 1 (2009)
Issue (Month): 3
Contact details of provider: http://www.inderscience.com/browse/index.php?journalID=282
Handle: RePEc:ids:injdan:v:1:y:2009:i:3:p:242-274