Printed from https://ideas.repec.org/a/spr/mathme/v62y2005i3p387-397.html

On mean reward variance in semi-Markov processes

Author

Listed:
  • Karel Sladký

Abstract

As an extension of the discrete-time case, this note investigates the variance of the total cumulative reward for the embedded Markov chain of semi-Markov processes. Under the assumption that the chain is aperiodic and contains a single class of recurrent states, recursive formulae for the variance are obtained; they show that the variance growth rate is asymptotically linear in time. Expressions for computing this growth rate are provided. Copyright Springer-Verlag 2005
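
A minimal illustration of the asymptotic result (a Monte Carlo sketch, not the paper's recursive formulae): for a small semi-Markov reward process whose embedded Markov chain has a single recurrent class, the variance of the cumulative reward R(t) grows asymptotically linearly in t, so Var[R(t)]/t should settle near a constant as the horizon grows. All numbers below (transition matrix, holding times, reward rates) are hypothetical examples, not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    P = np.array([[0.2, 0.8],           # embedded transition matrix (single recurrent class)
                  [0.6, 0.4]])
    mean_hold = np.array([1.0, 2.0])    # mean exponential holding times per state
    reward_rate = np.array([3.0, -1.0]) # reward earned per unit time in each state

    def cumulative_reward(horizon):
        """Simulate one trajectory up to `horizon` and return the total reward."""
        state, t, total = 0, 0.0, 0.0
        while t < horizon:
            stay = rng.exponential(mean_hold[state])
            stay = min(stay, horizon - t)       # truncate the final sojourn at the horizon
            total += reward_rate[state] * stay
            t += stay
            state = rng.choice(2, p=P[state])
        return total

    for horizon in (50.0, 200.0, 800.0):
        rewards = np.array([cumulative_reward(horizon) for _ in range(2000)])
        print(f"t = {horizon:6.0f}   Var[R(t)]/t ~ {rewards.var() / horizon:.3f}")

The printed ratios should be roughly constant across the three horizons, which is the linear variance growth that the note characterizes exactly through its recursive formulae.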

Suggested Citation

  • Karel Sladký, 2005. "On mean reward variance in semi-Markov processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 62(3), pages 387-397, December.
  • Handle: RePEc:spr:mathme:v:62:y:2005:i:3:p:387-397
    DOI: 10.1007/s00186-005-0039-z

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s00186-005-0039-z
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s00186-005-0039-z?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ying Huang & L. C. M. Kallenberg, 1994. "On Finding Optimal Policies for Markov Decision Chains: A Unifying Framework for Mean-Variance-Tradeoffs," Mathematics of Operations Research, INFORMS, vol. 19(2), pages 434-448, May.
    2. Kawai, Hajime, 1987. "A variance minimization problem for a Markov decision process," European Journal of Operational Research, Elsevier, vol. 31(1), pages 140-145, July.
    3. Jerzy A. Filar & L. C. M. Kallenberg & Huey-Miin Lee, 1989. "Variance-Penalized Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 14(1), pages 147-161, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Alessandro Arlotto & Noah Gans & J. Michael Steele, 2014. "Markov Decision Problems Where Means Bound Variances," Operations Research, INFORMS, vol. 62(4), pages 864-875, August.
    2. Karel Sladký, 2013. "Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes," Czech Economic Review, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, vol. 7(3), pages 146-161, November.
    3. Dmitry Krass & O. J. Vrieze, 2002. "Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(3), pages 545-566, August.
    4. Mannor, Shie & Tsitsiklis, John N., 2013. "Algorithmic aspects of mean–variance optimization in Markov decision processes," European Journal of Operational Research, Elsevier, vol. 231(3), pages 645-653.
    5. Kang Boda & Jerzy Filar, 2006. "Time Consistent Dynamic Risk Measures," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 63(1), pages 169-186, February.
    6. Jingnan Fan & Andrzej Ruszczyński, 2018. "Risk measurement and risk-averse control of partially observable discrete-time Markov systems," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 88(2), pages 161-184, October.
    7. Jingnan Fan & Andrzej Ruszczynski, 2014. "Process-Based Risk Measures and Risk-Averse Control of Discrete-Time Systems," Papers 1411.2675, arXiv.org, revised Nov 2016.
    8. Gordon B. Hazen, 2022. "Augmenting Markov Cohort Analysis to Compute (Co)Variances: Implications for Strength of Cost-Effectiveness," INFORMS Journal on Computing, INFORMS, vol. 34(6), pages 3170-3180, November.
    9. Ma, Shuai & Ma, Xiaoteng & Xia, Li, 2023. "A unified algorithm framework for mean-variance optimization in discounted Markov decision processes," European Journal of Operational Research, Elsevier, vol. 311(3), pages 1057-1067.
    10. Kumar, Uday M & Bhat, Sanjay P. & Kavitha, Veeraruna & Hemachandra, Nandyala, 2023. "Approximate solutions to constrained risk-sensitive Markov decision processes," European Journal of Operational Research, Elsevier, vol. 310(1), pages 249-267.
    11. Özlem Çavuş & Andrzej Ruszczyński, 2014. "Computational Methods for Risk-Averse Undiscounted Transient Markov Models," Operations Research, INFORMS, vol. 62(2), pages 401-417, April.
    12. Krasimira Kovachka & Tihomira Zlatanova & Desislava Lyubenova, 2015. "Analysis Of Economic Efficiency Of Municipal Hospital," Economy & Business Journal, International Scientific Publications, Bulgaria, vol. 9(1), pages 594-600.
    13. Shie Mannor & Duncan Simester & Peng Sun & John N. Tsitsiklis, 2007. "Bias and Variance Approximation in Value Function Estimates," Management Science, INFORMS, vol. 53(2), pages 308-322, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:mathme:v:62:y:2005:i:3:p:387-397. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.