Printed from https://ideas.repec.org/a/fau/aucocz/au2013_146.html

Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

Author

  • Karel Sladký

    (Academy of Sciences of the Czech Republic, Institute of Information Theory and Automation, Department of Econometrics, Prague, Czech Republic)

Abstract

In this paper we consider unichain Markov decision processes with finite state space and compact action spaces, where the stream of rewards generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitivity coefficient (so-called risk-sensitive models). If the risk-sensitivity coefficient equals zero (the risk-neutral case), we arrive at a standard Markov decision process; we can then easily obtain necessary and sufficient mean-reward optimality conditions, and variability can be evaluated by the variance of the total expected rewards. For the risk-sensitive case we establish necessary and sufficient optimality conditions for the maximal (or minimal) growth rate of the expectation of the exponential utility function, along with the mean value of the corresponding certainty equivalent, which take into account not only the expected value of the total reward but also its higher moments.
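The criterion described in the abstract can be illustrated for a fixed stationary policy: in a finite Markov chain, the expectation of the exponential utility of the accumulated reward grows geometrically at a rate governed by the spectral radius of the matrix with entries p_ij · exp(γ r_j), and (1/γ) times the logarithm of that radius gives the long-run certainty-equivalent reward per step, which tends to the risk-neutral mean reward as γ → 0. A minimal NumPy sketch under these standard facts, using hypothetical two-state data (not from the paper):

```python
import numpy as np

# Hypothetical 2-state unichain Markov reward process (illustrative data only):
# transition matrix P and state-dependent one-step rewards r.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 3.0])

gamma = 0.5  # risk-sensitivity coefficient (gamma > 0: risk-seeking utility)

# Risk-sensitive criterion: E[exp(gamma * total reward)] grows like rho**n,
# where rho is the spectral radius of Q with Q[i, j] = P[i, j] * exp(gamma * r[j]).
Q = P * np.exp(gamma * r)[None, :]
rho = max(abs(np.linalg.eigvals(Q)))
growth_rate = np.log(rho)              # log-growth rate of expected utility
certainty_equiv = growth_rate / gamma  # certainty-equivalent reward per step

# Risk-neutral benchmark (gamma -> 0): mean reward under the stationary
# distribution pi solving pi P = pi (left eigenvector of P for eigenvalue 1).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
mean_reward = float(pi @ r)
```

For γ > 0 the certainty equivalent exceeds the mean reward (Jensen's inequality), reflecting that the exponential criterion also weights the higher moments of the total reward.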

Suggested Citation

  • Karel Sladký, 2013. "Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes," Czech Economic Review, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, vol. 7(3), pages 146-161, November.
  • Handle: RePEc:fau:aucocz:au2013_146

    Download full text from publisher

    File URL: http://auco.cuni.cz/mag/article/download/id/150/type/attachment
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Rolando Cavazos-Cadena, 2009. "Solutions of the average cost optimality equation for finite Markov decision chains: risk-sensitive and risk-neutral criteria," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 70(3), pages 541-566, December.
    2. Rolando Cavazos-Cadena & Daniel Hernández-Hernández, 2011. "Discounted Approximations for Risk-Sensitive Average Criteria in Markov Decision Chains with Finite State Space," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 133-146, February.
    3. Rolando Cavazos-Cadena & Raúl Montes-de-Oca, 2003. "The Value Iteration Algorithm in Risk-Sensitive Average Markov Decision Chains with Finite State Space," Mathematics of Operations Research, INFORMS, vol. 28(4), pages 752-776, November.
    4. Selene Chávez-Rodríguez & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2015. "Continuity of the optimal average cost in Markov decision chains with small risk-sensitivity," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 81(3), pages 269-298, June.
    5. Daniel Hernández Hernández & Diego Hernández Bustos, 2017. "Local Poisson Equations Associated with Discrete-Time Markov Control Processes," Journal of Optimization Theory and Applications, Springer, vol. 173(1), pages 1-29, April.
    6. Özlem Çavuş & Andrzej Ruszczyński, 2014. "Computational Methods for Risk-Averse Undiscounted Transient Markov Models," Operations Research, INFORMS, vol. 62(2), pages 401-417, April.
    7. Basu, Arnab & Ghosh, Mrinal Kanti, 2014. "Zero-sum risk-sensitive stochastic games on a countable state space," Stochastic Processes and their Applications, Elsevier, vol. 124(1), pages 961-983.
    8. Karel Sladký, 2005. "On mean reward variance in semi-Markov processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 62(3), pages 387-397, December.
    9. Alessandro Arlotto & Noah Gans & J. Michael Steele, 2014. "Markov Decision Problems Where Means Bound Variances," Operations Research, INFORMS, vol. 62(4), pages 864-875, August.
    10. Rolando Cavazos-Cadena, 2010. "Optimality equations and inequalities in a class of risk-sensitive average cost Markov decision chains," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 71(1), pages 47-84, February.
    11. Gustavo Portillo-Ramírez & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2023. "Contractive approximations in average Markov decision chains driven by a risk-seeking controller," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 98(1), pages 75-91, August.
    12. Kumar, Uday M & Bhat, Sanjay P. & Kavitha, Veeraruna & Hemachandra, Nandyala, 2023. "Approximate solutions to constrained risk-sensitive Markov decision processes," European Journal of Operational Research, Elsevier, vol. 310(1), pages 249-267.
    13. Arnab Basu & Mrinal K. Ghosh, 2018. "Nonzero-Sum Risk-Sensitive Stochastic Games on a Countable State Space," Mathematics of Operations Research, INFORMS, vol. 43(2), pages 516-532, May.
    14. Lucy Gongtao Chen & Daniel Zhuoyu Long & Melvyn Sim, 2015. "On Dynamic Decision Making to Meet Consumption Targets," Operations Research, INFORMS, vol. 63(5), pages 1117-1130, October.
    15. Bäuerle, Nicole & Rieder, Ulrich, 2017. "Zero-sum risk-sensitive stochastic games," Stochastic Processes and their Applications, Elsevier, vol. 127(2), pages 622-642.
    16. Guglielmo D’Amico & Fulvio Gismondi & Jacques Janssen & Raimondo Manca, 2015. "Discrete Time Homogeneous Markov Processes for the Study of the Basic Risk Processes," Methodology and Computing in Applied Probability, Springer, vol. 17(4), pages 983-998, December.
    17. Zeynep Erkin & Matthew D. Bailey & Lisa M. Maillart & Andrew J. Schaefer & Mark S. Roberts, 2010. "Eliciting Patients' Revealed Preferences: An Inverse Markov Decision Process Approach," Decision Analysis, INFORMS, vol. 7(4), pages 358-365, December.
    18. Nicole Bäuerle & Ulrich Rieder, 2014. "More Risk-Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 39(1), pages 105-120, February.
    19. Monahan, George E. & Sobel, Matthew J., 1997. "Risk-Sensitive Dynamic Market Share Attraction Games," Games and Economic Behavior, Elsevier, vol. 20(2), pages 149-160, August.
    20. V. S. Borkar & S. P. Meyn, 2002. "Risk-Sensitive Optimal Control for Markov Decision Processes with Monotone Cost," Mathematics of Operations Research, INFORMS, vol. 27(1), pages 192-209, February.

    More about this item

    Keywords

    Discrete-time Markov decision chains; exponential utility functions; certainty equivalent; mean-variance optimality; connections between risk-sensitive and risk-neutral models.

    JEL classification:

    • C44 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics - - - Operations Research; Statistical Decision Theory
    • C61 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Optimization Techniques; Programming Models; Dynamic Analysis


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions; when requesting a correction, please mention this item's handle: RePEc:fau:aucocz:au2013_146. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Lenka Stastna. General contact details of provider: https://edirc.repec.org/data/icunicz.html

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.