
Discounting, Ergodicity and Convergence for Markov Decision Processes

Author

Listed:
  • Thomas E. Morton

    (Carnegie-Mellon University)

  • William E. Wecker

    (University of Chicago)

Abstract

The rate at which Markov decision processes converge as the horizon length increases can be important for computations and for judging the appropriateness of models. The convergence rate is commonly associated with the discount factor \alpha. For example, the total value function for a broad set of problems is known to converge O(\alpha^n), i.e., geometrically with the discount factor. But the rate at which the finite horizon optimal policies converge depends on the convergence of the relative value function. (The relative value at a given state is the difference between the total value at that state and the total value at some fixed reference state.) Relative value convergence in turn depends both on the discount factor and on ergodic properties of the underlying nonhomogeneous Markov chains. We show in particular that for the stationary, finite state space, compact action space Markov decision problem, the relative value function converges O((\alpha\lambda)^n) for all \lambda > r(P), the modulus of the subdominant eigenvalue of the optimal infinite horizon policy (assumed unique). Easily obtained bounds for r(P), related to those of A. Brauer, are also given. Under additional restrictions, policy convergence is shown to be of the same order as relative value convergence, generalizing work of Shapiro, Schweitzer, and Odoni. The same result gives convergence properties for the undiscounted problem and for the case \alpha > 1; if \alpha r(P) > 1 the problem does not converge. As a by-product of the analysis, necessary conditions are given for the relative value function to converge O((\alpha\lambda)^n), 0 \le \lambda < 1.
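The two rates discussed in the abstract can be illustrated numerically. The sketch below uses a hypothetical two-state chain under a fixed policy (all numbers are illustrative assumptions, not taken from the paper): finite-horizon value iteration is run for n periods, and the error in the total value function, which decays like \alpha^n, is compared with the error in the relative value function, which decays like (\alpha\lambda)^n, where \lambda is the modulus of the subdominant eigenvalue of the policy's transition matrix.

```python
import numpy as np

# Hypothetical two-state chain under a fixed policy (illustrative numbers,
# not an example from the paper).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # transition matrix of the fixed policy
r = np.array([1.0, 0.0])         # one-period rewards
alpha = 0.95                     # discount factor
n = 60                           # horizon length

# Eigenvalues of P are 1 and 0.7; lam plays the role of r(P), the
# subdominant eigenvalue modulus.
lam = sorted(abs(np.linalg.eigvals(P)))[-2]

# Finite-horizon value iteration: v_{k+1} = r + alpha * P v_k, starting at 0.
v = np.zeros(2)
for _ in range(n):
    v = r + alpha * (P @ v)

# Infinite-horizon value v* solves (I - alpha P) v* = r.
v_star = np.linalg.solve(np.eye(2) - alpha * P, r)

# Total value error decays like alpha^n ...
total_err = np.max(np.abs(v_star - v))
# ... while the relative value (the difference across states) converges
# much faster, like (alpha * lam)^n.
rel_err = abs((v_star[0] - v_star[1]) - (v[0] - v[1]))

print(f"alpha^n       = {alpha**n:.2e}, total value error    = {total_err:.2e}")
print(f"(alpha*lam)^n = {(alpha * lam)**n:.2e}, relative value error = {rel_err:.2e}")
```

With these illustrative numbers the gap is already dramatic at n = 60: the total value is still converging at the geometric rate \alpha^n, while the relative value error is many orders of magnitude smaller, matching the O((\alpha\lambda)^n) rate the paper establishes.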

Suggested Citation

  • Thomas E. Morton & William E. Wecker, 1977. "Discounting, Ergodicity and Convergence for Markov Decision Processes," Management Science, INFORMS, vol. 23(8), pages 890-900, April.
  • Handle: RePEc:inm:ormnsc:v:23:y:1977:i:8:p:890-900
    DOI: 10.1287/mnsc.23.8.890

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.23.8.890
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mnsc.23.8.890?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription

    Citations

    Cited by:

    1. James T. Treharne & Charles R. Sox, 2002. "Adaptive Inventory Control for Nonstationary Demand and Partial Information," Management Science, INFORMS, vol. 48(5), pages 607-624, May.
    2. Iida, Tetsuo, 1999. "The infinite horizon non-stationary stochastic inventory problem: Near myopic policies and weak ergodicity," European Journal of Operational Research, Elsevier, vol. 116(2), pages 405-422, July.
    3. Robert L. Bray, 2019. "Markov Decision Processes with Exogenous Variables," Management Science, INFORMS, vol. 65(10), pages 4598-4606, October.
    4. Suresh Chand & Vernon Ning Hsu & Suresh Sethi, 2002. "Forecast, Solution, and Rolling Horizons in Operations Management Problems: A Classified Bibliography," Manufacturing & Service Operations Management, INFORMS, vol. 4(1), pages 25-43, September.


