Adaptive Learning in Stochastic Nonlinear Models When Shocks Follow a Markov Chain
Local convergence results for adaptive learning of stochastic steady states in nonlinear models are extended to the case where the exogenous observable variables follow a finite Markov chain. The stability conditions for the corresponding nonstochastic model and its steady states yield convergence for the stochastic model when shocks are sufficiently small. The results are applied to asset pricing and to an overlapping generations model. Large shocks can destabilize learning even if the steady state is stable with small shocks.
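The learning scheme the abstract describes can be illustrated with a minimal sketch (my own illustration, not the paper's code): agents hold a point expectation of the steady state and revise it with a decreasing gain, while the actual outcome is a nonlinear function of that expectation plus a small shock driven by a two-state Markov chain. The particular map `f(theta) = 1 + 0.5*tanh(theta)` and the shock values are assumptions chosen so that the nonstochastic steady state is stable (|f'| < 1 there), matching the small-shock convergence case.

```python
# Hypothetical example: adaptive "steady-state" learning when the
# exogenous shock follows a two-state Markov chain.
import math
import random

def simulate(T=20000, seed=0):
    rng = random.Random(seed)
    shocks = [0.05, -0.05]          # small symmetric Markov-chain shock values
    stay = 0.9                      # probability of remaining in the current state
    state = 0
    theta = 1.0                     # initial expectation of the steady state
    for t in range(1, T + 1):
        # actual law of motion: y_t = f(theta_t) + shock_t
        y = 1.0 + 0.5 * math.tanh(theta) + shocks[state]
        theta += (y - theta) / t    # decreasing-gain recursive mean (adaptive learning)
        if rng.random() > stay:     # Markov transition between shock states
            state = 1 - state
    return theta

# The nonstochastic steady state solves theta = 1 + 0.5*tanh(theta),
# roughly theta* = 1.448; with small shocks the estimate settles near it.
```

Because f'(theta*) ≈ 0.1 < 1, the nonstochastic stability condition holds, and with these small shocks the recursion converges near theta*; the abstract's final point is that sufficiently large shocks can break this convergence even though theta* itself is unchanged.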
Date of creation:
Date of revision: Apr 2003
Contact details of provider: Postal: Øster Farimagsgade 5, Building 26, DK-1353 Copenhagen K., Denmark
Phone: +45 35 32 30 10
Fax: +45 35 32 30 00
Web page: http://www.econ.ku.dk
Handle: RePEc:kud:kuiedp:0322