
Markov-Renewal Programming. II: Infinite Return Models, Example

Author

Listed:
  • William S. Jewell

    (Operations Research Center and Department of Industrial Engineering, University of California, Berkeley)

Abstract

This paper is a continuation of a previous one which investigates programming over a Markov-renewal process, in which the intervals between transitions of a system from state i to state j are independent samples from a distribution that may depend upon both i and j. Given a reward structure, and a decision mechanism that influences both the rewards and the Markov-renewal process, the problem is to select alternatives at each transition so as to maximize total expected reward. The first portion of the paper investigated various finite-return models. In this part of the paper, we investigate the infinite-return models, where it becomes necessary to consider only stationary policies that maximize the dominant term in the reward. It is then important to specify whether the limiting experiment is (I) undiscounted, with the number of transitions n → ∞, (II) undiscounted, with a time horizon t → ∞, or (III) discounted, with infinite n or t and discount factor a → 0. In each case, a limiting form for the total expected reward is shown, and an algorithm developed to maximize the rate of return. The problem of finding the optimal or near-optimal policies in the case of ties is still computationally unresolved. Extensions to nonergodic processes are indicated, and special results for the two-state process are presented. Finally, an example of machine maintenance and repair is used to illustrate the generality of the models and the special problems that may arise.
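
The time-average case (II) lends itself to a compact illustration. Below is a minimal policy-iteration sketch, in Python with NumPy, for a unichain semi-Markov decision process: it alternates between solving the evaluation equations v(i) = r(i,d) - g·tau(i,d) + Σ_j P(j|i,d)·v(j) for the long-run reward rate g and relative values v, and improving the policy state by state. The array names P, r, tau and the two-state data are hypothetical stand-ins, not Jewell's notation, algorithm, or maintenance example.

    import numpy as np

    # Illustrative sketch only; data and names are assumptions, not the paper's.
    # P[i, a, j] : transition probability from state i to state j under action a
    # r[i, a]    : expected reward per transition in state i under action a
    # tau[i, a]  : expected sojourn time in state i under action a

    def evaluate(P, r, tau, policy):
        """Solve v(i) = r(i,d) - g*tau(i,d) + sum_j P(j|i,d) v(j)
        with the normalization v(n-1) = 0; return the gain g and values v."""
        n = P.shape[0]
        Pd = np.array([P[i, policy[i]] for i in range(n)])
        rd = np.array([r[i, policy[i]] for i in range(n)])
        td = np.array([tau[i, policy[i]] for i in range(n)])
        A = np.eye(n) - Pd
        A[:, -1] = td                  # since v(n-1) = 0, its column carries g
        x = np.linalg.solve(A, rd)     # x = (v(0), ..., v(n-2), g)
        v = np.append(x[:-1], 0.0)
        return x[-1], v

    def policy_iteration(P, r, tau):
        n, m = r.shape
        policy = np.zeros(n, dtype=int)
        while True:
            g, v = evaluate(P, r, tau, policy)
            # Improvement: maximize the test quantity r - g*tau + P v,
            # keeping the current action on ties to guarantee termination.
            test = r - g * tau + np.einsum("iaj,j->ia", P, v)
            new_policy = policy.copy()
            for i in range(n):
                best = int(test[i].argmax())
                if test[i, best] > test[i, policy[i]] + 1e-9:
                    new_policy[i] = best
            if np.array_equal(new_policy, policy):
                return policy, g, v
            policy = new_policy

    # Hypothetical two-state, two-action instance.
    P = np.array([[[0.3, 0.7], [0.6, 0.4]],
                  [[0.5, 0.5], [0.2, 0.8]]])
    r = np.array([[4.0, 6.0], [1.0, 3.0]])
    tau = np.array([[2.0, 5.0], [1.0, 2.0]])

    policy, g, v = policy_iteration(P, r, tau)
    print("policy:", policy, " long-run reward rate g =", round(g, 4))

Run as a script, this prints the improved policy and its long-run reward rate; fixing v at the last state to zero is the usual normalization that makes the evaluation system square.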

Suggested Citation

  • William S. Jewell, 1963. "Markov-Renewal Programming. II: Infinite Return Models, Example," Operations Research, INFORMS, vol. 11(6), pages 949-971, December.
  • Handle: RePEc:inm:oropre:v:11:y:1963:i:6:p:949-971
    DOI: 10.1287/opre.11.6.949

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.11.6.949
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.11.6.949?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Prasenjit Mondal, 2020. "Computing semi-stationary optimal policies for multichain semi-Markov decision processes," Annals of Operations Research, Springer, vol. 287(2), pages 843-865, April.
    2. Tony Haitao Cui & Jagmohan S. Raju & Z. John Zhang, 2008. "A Price Discrimination Model of Trade Promotions," Marketing Science, INFORMS, vol. 27(5), pages 779-795, September-October.
    3. Chris P. Lee & Glenn M. Chertow & Stefanos A. Zenios, 2008. "Optimal Initiation and Management of Dialysis Therapy," Operations Research, INFORMS, vol. 56(6), pages 1428-1449, December.
    4. Prasenjit Mondal, 2015. "Linear Programming and Zero-Sum Two-Person Undiscounted Semi-Markov Games," Asia-Pacific Journal of Operational Research (APJOR), World Scientific Publishing Co. Pte. Ltd., vol. 32(06), pages 1-20, December.
    5. Nooshin Salari & Viliam Makis, 2020. "Application of Markov renewal theory and semi‐Markov decision processes in maintenance modeling and optimization of multi‐unit systems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 67(7), pages 548-558, October.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:11:y:1963:i:6:p:949-971. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.