Solving Stochastic Dynamic Programming Problems Using Rules Of Thumb
This paper develops a new method for constructing approximate solutions to discrete time, infinite horizon, discounted stochastic dynamic programming problems with convex choice sets. The key idea is to restrict the decision rule to belong to a parametric class of functions. The agent then chooses the best decision rule from within this class. Monte Carlo simulations are used to calculate arbitrarily precise estimates of the optimal decision rule parameters. The solution method is used to solve a version of the Brock-Mirman (1972) stochastic optimal growth model. For this model, relatively simple rules of thumb provide very good approximations to optimal behavior.
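As an illustration of the approach the abstract describes, here is a minimal sketch under assumed specifications (log utility, Cobb-Douglas production with full depreciation, and illustrative parameter values, none of which are taken from the paper): the decision rule is restricted to a one-parameter family, consuming a fixed share of output, and Monte Carlo simulation is used to pick the best parameter.

```python
import math
import random

# Sketch of the abstract's idea: restrict the decision rule to a
# one-parameter family (consume a fixed share theta of output) and use
# Monte Carlo simulation to choose the best theta.  Model: a Brock-Mirman
# style growth economy with log utility, production y = z * k**alpha, and
# full depreciation.  All numbers here are illustrative assumptions.
ALPHA, BETA = 0.36, 0.95   # capital share, discount factor (assumed)
SIGMA = 0.1                # std. dev. of the log productivity shock (assumed)
T, N_SIMS = 200, 50        # truncated horizon, Monte Carlo replications

def simulated_value(theta, seed=0):
    """Average simulated discounted utility of the rule c = theta * y."""
    rng = random.Random(seed)  # common random numbers across candidate thetas
    total = 0.0
    for _ in range(N_SIMS):
        k, disc, v = 1.0, 1.0, 0.0
        for _ in range(T):
            z = math.exp(rng.gauss(0.0, SIGMA))  # lognormal productivity
            y = z * k ** ALPHA
            c = theta * y                        # rule-of-thumb consumption
            v += disc * math.log(c)
            disc *= BETA
            k = y - c                            # next-period capital stock
        total += v
    return total / N_SIMS

# Grid search over the consumption share.  With log utility and full
# depreciation the exact optimal rule is known in closed form: consume the
# share 1 - ALPHA*BETA of output, so the search should land near 0.658.
grid = [i / 100 for i in range(5, 100, 5)]
best_theta = max(grid, key=simulated_value)
```

Using the same seed for every candidate theta (common random numbers) means candidates are compared on identical shock sequences, which sharpens the comparison; the paper's own estimation procedure may of course differ.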
Date of creation: May 1991
Contact details of provider: Postal: Kingston, Ontario, K7L 3N6
Phone: (613) 533-2250
Fax: (613) 533-6668
Web page: http://qed.econ.queensu.ca/
Handle: RePEc:qed:wpaper:816