
Value iteration and approximately optimal stationary policies in finite-state average Markov decision chains

Author

Listed:
  • Rolando Cavazos-Cadena

Abstract

This work concerns finite-state Markov decision chains endowed with the long-run average reward criterion. Assuming that the optimality equation has a solution, it is shown that a nearly optimal stationary policy, as well as an approximation to the optimal average reward within a specified error, can be obtained in a finite number of steps of the value iteration method. These results extend others already available in the literature, which were established under more stringent restrictions on the ergodic structure of the decision process. Copyright Springer-Verlag Berlin Heidelberg 2002
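
The abstract's claim concerns stopping value iteration after finitely many steps and reading off a nearly optimal stationary policy together with a bracketing estimate of the optimal average reward. As a rough, non-authoritative illustration (not taken from the paper), the sketch below runs standard relative value iteration with a span-based stopping rule on a finite average-reward MDP. The array shapes, the function name, the tolerance eps, the renormalization step, and the stopping test are assumptions made for this example; the paper's contribution is precisely to justify this kind of finite termination under weaker restrictions on the ergodic structure than the standard ones.

    import numpy as np

    def relative_value_iteration(P, r, eps=1e-6, max_iter=10_000):
        """Span-based relative value iteration for a finite average-reward MDP.

        P : array of shape (A, S, S), P[a, s, s'] = transition probability
        r : array of shape (A, S),    r[a, s]     = one-step reward
        Returns (estimated optimal gain, greedy stationary policy).
        """
        A, S, _ = P.shape
        v = np.zeros(S)
        for _ in range(max_iter):
            # One value-iteration step: Q[a, s] = r(s, a) + sum_{s'} P(s'|s, a) v(s')
            Q = r + P @ v              # shape (A, S)
            v_new = Q.max(axis=0)      # updated relative values
            diff = v_new - v
            # The span of the increment brackets the optimal gain; when the span
            # is small, the greedy policy is nearly optimal (under suitable
            # conditions on the ergodic structure).
            if diff.max() - diff.min() < eps:
                return 0.5 * (diff.max() + diff.min()), Q.argmax(axis=0)
            v = v_new - v_new.min()    # renormalize to keep the values bounded
        return 0.5 * (diff.max() + diff.min()), Q.argmax(axis=0)

    # Tiny two-state, two-action example (made-up numbers, for illustration only)
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    r = np.array([[1.0, 0.0],
                  [0.5, 0.8]])
    gain, policy = relative_value_iteration(P, r)

In this sketch the midpoint of the span is returned as the approximation to the optimal average reward, and the greedy policy at termination plays the role of the nearly optimal stationary policy discussed in the abstract.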

Suggested Citation

  • Rolando Cavazos-Cadena, 2002. "Value iteration and approximately optimal stationary policies in finite-state average Markov decision chains," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 56(2), pages 181-196, November.
  • Handle: RePEc:spr:mathme:v:56:y:2002:i:2:p:181-196
    DOI: 10.1007/s001860200205

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s001860200205
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s001860200205?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Karel Sladký, 2013. "Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes," Czech Economic Review, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, vol. 7(3), pages 146-161, November.
    2. Rolando Cavazos-Cadena & Raúl Montes-de-Oca, 2003. "The Value Iteration Algorithm in Risk-Sensitive Average Markov Decision Chains with Finite State Space," Mathematics of Operations Research, INFORMS, vol. 28(4), pages 752-776, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:mathme:v:56:y:2002:i:2:p:181-196. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.