Stochastic Algorithms for Dynamic Models: Markov Perfect Equilibrium and the 'Curse' of Dimensionality
This paper provides an algorithm for computing policies in dynamic economic models whose state vectors evolve as ergodic Markov processes. The algorithm can be described as a simple learning process, one that agents might actually use, and it has two features that break the link between its computational burden and the dimension of the model's state space.

First, the integral over future states needed to determine policies is never calculated; it is estimated by a simple average of past outcomes. Second, the algorithm never computes policies at all points. Each iteration is defined by a location, and only the policies at that location are computed; random draws from the distribution those policies determine select the next location. As a result, the iterates repeatedly hit only the recurrent class of points, a subset of the feasible set whose cardinality is not directly tied to the dimension of the state space.

Our motivating example is Markov Perfect Equilibrium, a leading model of industry dynamics (see Maskin and Tirole, 1988). Although estimators for the primitive parameters of these models are often available, computational problems have made them difficult to use in applied analysis. We provide numerical results showing that our algorithm can be several orders of magnitude faster than standard algorithms in this case, opening up new possibilities for applied work.
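The two features described above can be illustrated with a minimal sketch on a toy single-agent problem. Everything below (the five-state chain, the `reward` and `draw_next` functions, the discount factor) is invented for illustration and is not the paper's industry model; the sketch only shows the mechanics: the continuation value is a running average of realized outcomes rather than a computed integral, and on each iteration only the currently visited state is updated, with the next state drawn from the transition the current policy induces.

```python
import random

random.seed(0)

STATES = range(5)
ACTIONS = (0, 1)          # hypothetical: 0 = "don't invest", 1 = "invest"
BETA = 0.95               # discount factor (illustrative value)

def reward(s, a):
    # hypothetical per-period payoff: higher states pay more, investing costs 0.5
    return s - 0.5 * a

def draw_next(s, a):
    # hypothetical transition rule: investing tends to move the state up
    if a == 1 and random.random() < 0.7:
        return min(s + 1, 4)
    if a == 0 and random.random() < 0.3:
        return max(s - 1, 0)
    return s

# Feature 1: instead of integrating over future states, keep a running
# average W of realized continuation values, one per (state, action).
W = {(s, a): 0.0 for s in STATES for a in ACTIONS}
n = {(s, a): 0 for s in STATES for a in ACTIONS}

s = 0
for _ in range(20000):
    # Feature 2: compute the (greedy) policy only at the current location.
    a = max(ACTIONS, key=lambda act: reward(s, act) + BETA * W[(s, act)])
    s_next = draw_next(s, a)
    # Realized continuation value at the drawn successor state.
    v_next = max(reward(s_next, b) + BETA * W[(s_next, b)] for b in ACTIONS)
    # Update the running average at the visited (state, action) only.
    n[(s, a)] += 1
    W[(s, a)] += (v_next - W[(s, a)]) / n[(s, a)]
    # The random draw, not a sweep over all states, picks the next location,
    # so updates concentrate on the recurrent class of the induced process.
    s = s_next

V = {s: max(reward(s, a) + BETA * W[(s, a)] for a in ACTIONS) for s in STATES}
print(V)
```

Because states outside the recurrent class of the induced process are visited rarely or never, their entries of `W` stay close to their initial values; this is the sense in which computational effort is tied to the recurrent class rather than to the full state space.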
Date of creation: Jan 1997
Publication status: Published in Econometrica (2001), 69(5): 1261-1281
Handle: RePEc:cwl:cwldpp:1144