An Approximated Solution to Continuous-Time Stochastic Optimal Control Problems Through Markov Decision Chains
Strategies for constructing a Markov decision chain that approximates a continuous-time, finite-horizon stochastic optimal control problem are investigated. Some simple, analytically soluble examples are treated, and low computational complexity is reported. Extensions of the method and its implementation are discussed. In particular, the relevance of the approximate solution to a stochastic renewable-resource valuation problem is examined.
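The construction the abstract describes can be illustrated by a minimal sketch in the spirit of the Markov chain approximation method: a controlled diffusion is replaced by a decision chain on a state grid with locally consistent transition probabilities, and the finite-horizon value function is computed by backward induction. All dynamics, costs, and parameters below are hypothetical choices for illustration, not the paper's actual problem.

```python
import numpy as np

# Hypothetical controlled diffusion dx = u dt + sigma dW with running cost
# (x^2 + u^2) and terminal cost x^2 over a finite horizon T. We approximate
# it by a Markov decision chain on a state grid whose one-step transition
# probabilities match the diffusion's local mean and variance.

sigma = 0.5                              # diffusion coefficient (assumed)
T, N = 1.0, 50                           # horizon and number of time steps
dt = T / N
h = 0.1                                  # state grid spacing
x = np.arange(-2.0, 2.0 + h, h)          # state grid
controls = np.linspace(-1.0, 1.0, 21)    # discretized control set

V = x**2                                 # terminal condition V(T, x) = x^2

for _ in range(N):                       # backward induction in time
    best = np.full_like(V, np.inf)
    for u in controls:
        # Locally consistent transitions to x + h and x - h: the chain's
        # one-step mean u*dt and variance ~ sigma^2*dt match the diffusion.
        p_up = 0.5 * (sigma**2 * dt / h**2 + u * dt / h)
        p_dn = 0.5 * (sigma**2 * dt / h**2 - u * dt / h)
        p_stay = 1.0 - p_up - p_dn
        cont = p_up * np.roll(V, -1) + p_dn * np.roll(V, 1) + p_stay * V
        cont[0], cont[-1] = V[0], V[-1]  # crude reflecting boundary
        best = np.minimum(best, (x**2 + u**2) * dt + cont)
    V = best

# V now holds the chain's approximation to the value function at t = 0.
```

With these (assumed) parameters the transition probabilities stay in [0, 1]; in general the grid spacing and time step must be chosen jointly so that local consistency holds.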
Date of creation: 01 Oct 1997
Date of revision: (none)
Note: Type of Document - LaTeX; prepared on UNIX; to print on PostScript; pages: 38; figures: included
Handle: RePEc:wpa:wuwpco:9710001