A Utility Criterion for Markov Decision Processes
Abstract
Optimality criteria for Markov decision processes have historically been based on a risk-neutral formulation of the decision maker's preferences. An explicit utility formulation, incorporating both risk and time preference and based on results in the axiomatic theory of choice under uncertainty, is developed. This yields an optimality criterion called utility optimality with constant aversion to risk: the objective is to maximize expected utility under an exponential utility function. Implicit in the formulation is a non-sequential interpretation of the decision process. It is shown that optimal policies exist, but are not necessarily stationary, for an infinite-horizon stationary Markov decision process with finite state and action spaces. An example is given.
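The criterion described in the abstract ranks policies by the expected exponential utility of total reward rather than by expected reward itself. A minimal sketch of that evaluation, on a hypothetical two-state MDP (the states, actions, transition probabilities, rewards, and risk-aversion parameter below are illustrative inventions, not taken from the paper):

```python
import math
import random

random.seed(0)

# Hypothetical MDP for illustration only.
# transitions[state][action] = list of (probability, next_state, reward).
# State 0 offers a "safe" action (certain reward) and a "risky" action
# (high payoff or a loss that moves to the absorbing state 1).
transitions = {
    0: {"safe":  [(1.0, 0, 1.0)],
        "risky": [(0.5, 0, 8.0), (0.5, 1, -1.0)]},
    1: {"safe":  [(1.0, 1, 0.0)],
        "risky": [(1.0, 1, 0.0)]},
}

def expected_utility(policy, gamma, horizon, n_runs=20000):
    """Monte Carlo estimate of E[-exp(-gamma * total_reward)],
    the exponential-utility objective with risk-aversion gamma.

    policy maps (state, epoch) -> action, so non-stationary policies
    (which the paper shows may be needed) are expressible.
    """
    total = 0.0
    for _ in range(n_runs):
        s, reward_sum = 0, 0.0
        for t in range(horizon):
            a = policy(s, t)
            r, acc = random.random(), 0.0
            for p, s2, rew in transitions[s][a]:
                acc += p
                if r <= acc:
                    s, reward_sum = s2, reward_sum + rew
                    break
        total += -math.exp(-gamma * reward_sum)
    return total / n_runs

always_safe = lambda s, t: "safe"
always_risky = lambda s, t: "risky"

# Under strong risk aversion the safe policy scores higher, even though
# the risky policy has the larger expected total reward; with gamma near
# zero the ranking reverses, approaching the risk-neutral criterion.
u_safe = expected_utility(always_safe, gamma=2.0, horizon=3)
u_risky = expected_utility(always_risky, gamma=2.0, horizon=3)
```

For example, with the numbers above the risky policy's expected total reward over three epochs exceeds the safe policy's, yet at `gamma=2.0` its expected utility is lower, because utility penalizes the possible early loss heavily.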
Bibliographic Info
Article provided by INFORMS in its journal Management Science.
Volume (Year): 23 (1976)
Issue (Month): 1 (September)
Citations, as recorded by the CitEc Project:
- Takayuki Osogami, 2012. "Iterated risk measures for risk-sensitive Markov decision processes with discounted cost," Papers 1202.3755, arXiv.org.
- Monahan, George E. & Sobel, Matthew J., 1997. "Risk-Sensitive Dynamic Market Share Attraction Games," Games and Economic Behavior, Elsevier, vol. 20(2), pages 149-160, August.