Robust Learning Stability with Operational Monetary Policy Rules
We consider “robust stability” of a rational expectations equilibrium, which we define as stability under discounted (constant-gain) least-squares learning for a range of gain parameters. We find that many operational interest-rate rules, i.e. rules that do not depend on contemporaneous values of endogenous aggregate variables, fail to exhibit robust stability. We consider a variety of interest-rate rules, including instrument rules, optimal reaction functions under discretion or commitment, and rules that approximate optimal policy under commitment. For some reaction functions we allow for an interest-rate stabilization motive in the policy objective. The expectations-based rules proposed in Evans and Honkapohja (2003, 2006) deliver robust learning stability. In contrast, many proposed alternatives become unstable under learning even at small values of the gain parameter.
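The learning criterion named in the abstract, discounted (constant-gain) least-squares learning, can be illustrated with a minimal sketch. The model below is an illustrative assumption, not taken from the paper: agents forecast a scalar variable with a constant-only perceived law of motion, and the actual law of motion feeds that forecast back into outcomes. The equilibrium is stable under learning when the feedback coefficient satisfies the E-stability condition and unstable otherwise.

```python
import numpy as np

# Minimal sketch of discounted (constant-gain) least-squares learning in a
# scalar self-referential model. The model and parameter values here are
# illustrative assumptions, not the paper's New Keynesian framework.
#
# Actual law of motion: y_t = mu + alpha * E_t[y_t] + noise, where agents'
# forecast is E_t[y_t] = phi_{t-1} (constant-only perceived law of motion).
# The REE is phi* = mu / (1 - alpha); E-stability requires alpha < 1.

def simulate(alpha, mu=1.0, gain=0.05, T=2000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    phi = 0.0  # agents' current estimate
    for _ in range(T):
        y = mu + alpha * phi + sigma * rng.standard_normal()
        # Constant-gain recursive least-squares update; with a constant
        # regressor the second-moment matrix R is trivially 1, so the
        # update reduces to a fixed-gain error correction:
        phi += gain * (y - phi)
    return phi

stable = simulate(alpha=0.5)    # E-stable: estimate settles near phi* = 2
unstable = simulate(alpha=1.5)  # E-unstable: estimate drifts away from phi* = -2
print(stable, unstable)
```

Because the gain is constant rather than decreasing, the estimate never converges exactly but fluctuates in a neighborhood of the REE in the stable case; larger gains widen that neighborhood and, for some rules, tip the dynamics into instability, which is the "robust stability" issue the paper studies.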