
Characterization of the Optimal Risk-Sensitive Average Cost in Denumerable Markov Decision Chains

Author

Listed:
  • Rolando Cavazos-Cadena

    (Departamento de Estadística y Cálculo, Universidad Autónoma Agraria Antonio Narro, Buenavista, Saltillo, Coahuila 25315, México)

Abstract

This work is concerned with Markov decision chains on a denumerable state space. The controller has a positive risk-sensitivity coefficient, and the performance of a control policy is measured by a risk-sensitive average cost criterion. Besides standard continuity-compactness conditions, it is assumed that the state process is communicating under any stationary policy, and that the simultaneous Doeblin condition holds. In this context, it is shown that if the cost function is bounded from below, and the superior limit average index is finite at some point, then (i) the optimal superior and inferior limit average value functions coincide and are constant, (ii) the optimal average cost is characterized via an extended version of the Collatz-Wielandt formula in the theory of positive matrices, and (iii) an optimality inequality is established, from which a stationary optimal policy is obtained. Moreover, an explicit example is given to show that, even if the cost function is bounded, the strict inequality may occur in the optimality relation.
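For orientation, the two objects named above can be written in standard notation; this is a sketch using illustrative symbols (λ for the risk-sensitivity coefficient, C for the cost function, π for a control policy) rather than the paper's own notation.

% Risk-sensitive (exponential) long-run average cost of a policy \pi starting at state x,
% with risk-sensitivity coefficient \lambda > 0:
\[
  J(x,\pi) \;=\; \limsup_{n\to\infty} \frac{1}{\lambda n}\,
  \log E_x^{\pi}\!\left[\exp\!\Big(\lambda \sum_{t=0}^{n-1} C(X_t,A_t)\Big)\right]
\]
% Classical Collatz-Wielandt formula: for a nonnegative irreducible matrix A,
% the spectral radius admits the max-min / min-max characterization
\[
  \rho(A) \;=\; \max_{x \ge 0,\; x \neq 0}\; \min_{i\,:\,x_i>0} \frac{(Ax)_i}{x_i}
          \;=\; \min_{x > 0}\; \max_{i} \frac{(Ax)_i}{x_i}
\]

Result (ii) of the paper characterizes the optimal risk-sensitive average cost through an extension of a formula of this max-min type to the controlled, denumerable-state setting.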

Suggested Citation

  • Rolando Cavazos-Cadena, 2018. "Characterization of the Optimal Risk-Sensitive Average Cost in Denumerable Markov Decision Chains," Mathematics of Operations Research, INFORMS, vol. 43(3), pages 1025-1050, August.
  • Handle: RePEc:inm:ormoor:v:43:y:2018:i:3:p:1025-1050
    DOI: 10.1287/moor.2017.0893

    Download full text from publisher

    File URL: https://doi.org/10.1287/moor.2017.0893
    Download Restriction: no

    File URL: https://libkey.io/10.1287/moor.2017.0893?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Pelin Canbolat, 2014. "Optimal halting policies in Markov population decision chains with constant risk posture," Annals of Operations Research, Springer, vol. 222(1), pages 227-237, November.
    2. Lukasz Stettner, 1999. "Risk sensitive portfolio optimization," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 50(3), pages 463-474, December.
    3. Nicole Bäuerle & Ulrich Rieder, 2014. "More Risk-Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 39(1), pages 105-120, February.
    4. Ronald A. Howard & James E. Matheson, 1972. "Risk-Sensitive Markov Decision Processes," Management Science, INFORMS, vol. 18(7), pages 356-369, March.
    5. Rolando Cavazos-Cadena, 2009. "Solutions of the average cost optimality equation for finite Markov decision chains: risk-sensitive and risk-neutral criteria," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 70(3), pages 541-566, December.
    6. V. S. Borkar & S. P. Meyn, 2002. "Risk-Sensitive Optimal Control for Markov Decision Processes with Monotone Cost," Mathematics of Operations Research, INFORMS, vol. 27(1), pages 192-209, February.
    7. Marcin Pitera & Łukasz Stettner, 2016. "Long run risk sensitive portfolio with general factors," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 83(2), pages 265-293, April.
    8. Balaji, S. & Meyn, S. P., 2000. "Multiplicative ergodicity and large deviations for an irreducible Markov chain," Stochastic Processes and their Applications, Elsevier, vol. 90(1), pages 123-144, November.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Qingda Wei & Xian Chen, 2023. "Continuous-Time Markov Decision Processes Under the Risk-Sensitive First Passage Discounted Cost Criterion," Journal of Optimization Theory and Applications, Springer, vol. 197(1), pages 309-333, April.
    2. Gustavo Portillo-Ramírez & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2023. "Contractive approximations in average Markov decision chains driven by a risk-seeking controller," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 98(1), pages 75-91, August.
    3. Julio Saucedo-Zul & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2020. "A Discounted Approach in Communicating Average Markov Decision Chains Under Risk-Aversion," Journal of Optimization Theory and Applications, Springer, vol. 187(2), pages 585-606, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Julio Saucedo-Zul & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2020. "A Discounted Approach in Communicating Average Markov Decision Chains Under Risk-Aversion," Journal of Optimization Theory and Applications, Springer, vol. 187(2), pages 585-606, November.
    2. Gustavo Portillo-Ramírez & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2023. "Contractive approximations in average Markov decision chains driven by a risk-seeking controller," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 98(1), pages 75-91, August.
    3. Rubén Blancas-Rivera & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2020. "Discounted approximations in risk-sensitive average Markov cost chains with finite state space," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 91(2), pages 241-268, April.
    4. Carlos Camilo-Garay & Rolando Cavazos-Cadena & Hugo Cruz-Suárez, 2022. "Contractive Approximations in Risk-Sensitive Average Semi-Markov Decision Chains on a Finite State Space," Journal of Optimization Theory and Applications, Springer, vol. 192(1), pages 271-291, January.
    5. Bäuerle, Nicole & Rieder, Ulrich, 2017. "Zero-sum risk-sensitive stochastic games," Stochastic Processes and their Applications, Elsevier, vol. 127(2), pages 622-642.
    6. Anna Jaśkiewicz, 2007. "Average optimality for risk-sensitive control with general state space," Papers 0704.0394, arXiv.org.
    7. Arnab Basu & Mrinal K. Ghosh, 2018. "Nonzero-Sum Risk-Sensitive Stochastic Games on a Countable State Space," Mathematics of Operations Research, INFORMS, vol. 43(2), pages 516-532, May.
    8. Ghosh, Mrinal K. & Golui, Subrata & Pal, Chandan & Pradhan, Somnath, 2023. "Discrete-time zero-sum games for Markov chains with risk-sensitive average cost criterion," Stochastic Processes and their Applications, Elsevier, vol. 158(C), pages 40-74.
    9. Arnab Basu & Tirthankar Bhattacharyya & Vivek S. Borkar, 2008. "A Learning Algorithm for Risk-Sensitive Cost," Mathematics of Operations Research, INFORMS, vol. 33(4), pages 880-898, November.
    10. Basu, Arnab & Ghosh, Mrinal Kanti, 2014. "Zero-sum risk-sensitive stochastic games on a countable state space," Stochastic Processes and their Applications, Elsevier, vol. 124(1), pages 961-983.
    11. Bhabak, Arnab & Saha, Subhamay, 2022. "Risk-sensitive semi-Markov decision problems with discounted cost and general utilities," Statistics & Probability Letters, Elsevier, vol. 184(C).
    12. Guglielmo D’Amico & Fulvio Gismondi & Jacques Janssen & Raimondo Manca, 2015. "Discrete Time Homogeneous Markov Processes for the Study of the Basic Risk Processes," Methodology and Computing in Applied Probability, Springer, vol. 17(4), pages 983-998, December.
    13. Qingda Wei & Xian Chen, 2021. "Nonzero-sum Risk-Sensitive Average Stochastic Games: The Case of Unbounded Costs," Dynamic Games and Applications, Springer, vol. 11(4), pages 835-862, December.
    14. Arapostathis, Ari & Biswas, Anup, 2018. "Infinite horizon risk-sensitive control of diffusions without any blanket stability assumptions," Stochastic Processes and their Applications, Elsevier, vol. 128(5), pages 1485-1524.
    15. Naci Saldi & Tamer Başar & Maxim Raginsky, 2020. "Approximate Markov-Nash Equilibria for Discrete-Time Risk-Sensitive Mean-Field Games," Mathematics of Operations Research, INFORMS, vol. 45(4), pages 1596-1620, November.
    16. Rolando Cavazos-Cadena & Daniel Hernández-Hernández, 2011. "Discounted Approximations for Risk-Sensitive Average Criteria in Markov Decision Chains with Finite State Space," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 133-146, February.
    17. Nicole Bäuerle & Ulrich Rieder, 2017. "Partially Observable Risk-Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 42(4), pages 1180-1196, November.
    18. Nicole Bäuerle & Alexander Glauner, 2021. "Minimizing spectral risk measures applied to Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 94(1), pages 35-69, August.
    19. Rolando Cavazos-Cadena & Raúl Montes-de-Oca, 2003. "The Value Iteration Algorithm in Risk-Sensitive Average Markov Decision Chains with Finite State Space," Mathematics of Operations Research, INFORMS, vol. 28(4), pages 752-776, November.
    20. Karel Sladký, 2013. "Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes," Czech Economic Review, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, vol. 7(3), pages 146-161, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormoor:v:43:y:2018:i:3:p:1025-1050. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.