Small-Sample Confidence Intervals For Impulse Response Functions
Bias-corrected bootstrap confidence intervals explicitly account for the bias and skewness of the small-sample distribution of the impulse response estimator, while retaining asymptotic validity in stationary autoregressions. Monte Carlo simulations for a wide range of bivariate models show that in small samples bias-corrected bootstrap intervals tend to be more accurate than delta method intervals, standard bootstrap intervals, and Monte Carlo integration intervals. This conclusion holds for VAR models estimated in levels, as deviations from a linear time trend, and in first differences. It also holds for random walk processes and cointegrated processes estimated in levels. An empirical example shows that bias-corrected bootstrap intervals may imply economic interpretations of the data that are substantively different from those based on standard methods.
© 1998 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology
Volume (Year): 80 (1998)
Issue (Month): 2 (May)
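The bias-corrected ("bootstrap-after-bootstrap") procedure the abstract evaluates can be illustrated in a minimal univariate setting. The sketch below is an assumption-laden simplification, not the paper's implementation: it uses an AR(1) instead of a bivariate VAR, and all function names, the clipping rule that keeps the corrected coefficient in the stationary region, and the tuning constants (`B`, `alpha`) are illustrative choices. The core idea it demonstrates is the paper's two stages: first bootstrap the autoregressive estimator to estimate its small-sample bias, then bootstrap impulse responses from the bias-corrected model, applying the same correction to each replicate.

```python
# Illustrative sketch of a bias-corrected bootstrap confidence interval
# for AR(1) impulse responses. The AR(1) setting and all names/constants
# are assumptions for exposition, not the paper's bivariate VAR design.
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(y):
    """OLS estimate of rho and residuals for y_t = rho * y_{t-1} + e_t."""
    x, z = y[:-1], y[1:]
    rho = (x @ z) / (x @ x)
    return rho, z - rho * x

def simulate_ar1(rho, resid, n, rng):
    """Build a bootstrap series of length n by resampling residuals."""
    e = rng.choice(resid, size=n, replace=True)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def bias_corrected_bootstrap_ci(y, horizon=4, B=499, alpha=0.10, rng=rng):
    """Percentile interval for the AR(1) impulse response rho**h."""
    rho_hat, resid = fit_ar1(y)
    n = len(y)
    # Stage 1: estimate the small-sample bias of rho_hat by bootstrap.
    boot_rhos = np.array(
        [fit_ar1(simulate_ar1(rho_hat, resid, n, rng))[0] for _ in range(B)]
    )
    bias = boot_rhos.mean() - rho_hat
    # Correct the point estimate, keeping it inside the stationary region
    # (an ad hoc clip standing in for the paper's stationarity adjustment).
    rho_bc = np.clip(rho_hat - bias, -0.999, 0.999)
    # Stage 2: bootstrap impulse responses from the bias-corrected model,
    # bias-correcting each bootstrap replicate in the same way.
    irfs = np.empty((B, horizon + 1))
    for b in range(B):
        rho_b, _ = fit_ar1(simulate_ar1(rho_bc, resid, n, rng))
        rho_b = np.clip(rho_b - bias, -0.999, 0.999)
        irfs[b] = rho_b ** np.arange(horizon + 1)
    lo, hi = np.percentile(irfs, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return rho_bc, lo, hi
```

As a usage example, fitting this to a simulated stationary series, e.g. `bias_corrected_bootstrap_ci(y, horizon=4, B=499)`, returns the bias-corrected coefficient plus lower and upper interval bands at each horizon; at horizon zero both bands equal one by construction, since the impulse response is normalized there.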
Handle: RePEc:tpr:restat:v:80:y:1998:i:2:p:218-230