Printed from https://ideas.repec.org/a/inm/ormnsc/v24y1978i11p1127-1137.html

Modified Policy Iteration Algorithms for Discounted Markov Decision Problems

Author

Listed:
  • Martin L. Puterman

    (University of British Columbia)

  • Moon Chirl Shin

    (University of British Columbia)

Abstract

In this paper we study a class of modified policy iteration algorithms for solving Markov decision problems. These correspond to performing policy evaluation by successive approximations. We discuss the relationship of these algorithms to Newton-Kantorovich iteration and demonstrate their convergence. We show that all of these algorithms converge at least as quickly as successive approximations and obtain estimates of their rates of convergence. An analysis of the computational requirements of these algorithms suggests that they may be appropriate for solving problems with large numbers of actions, large numbers of states, sparse transition matrices, or small discount rates. These algorithms are compared to policy iteration, successive approximations, and Gauss-Seidel methods on large randomly generated test problems.
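The scheme the abstract describes — a policy-improvement step followed by policy evaluation truncated to a fixed number of successive-approximation sweeps — can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' code; the array shapes, the stopping rule, and the parameter names (`m` for the number of evaluation sweeps, `gamma` for the discount factor) are assumptions made for the example.

```python
import numpy as np

def modified_policy_iteration(P, r, gamma=0.9, m=5, tol=1e-8, max_iter=1000):
    """Modified policy iteration for a discounted MDP (illustrative sketch).

    P: (A, S, S) array of transition matrices, one per action.
    r: (A, S) array of expected rewards r[a, s].
    Policy evaluation is truncated to m successive-approximation sweeps,
    interpolating between value iteration (m = 0) and policy iteration
    (m -> infinity, i.e. exact evaluation).
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy improvement: one Bellman-optimality sweep.
        q = r + gamma * np.einsum('aij,j->ai', P, v)  # (A, S) action values
        policy = q.argmax(axis=0)
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return policy, v_new
        # Truncated policy evaluation: m extra sweeps under the fixed policy.
        v = v_new
        P_pi = P[policy, np.arange(S)]   # (S, S) transitions under the policy
        r_pi = r[policy, np.arange(S)]   # (S,) rewards under the policy
        for _ in range(m):
            v = r_pi + gamma * P_pi @ v
    return policy, v
```

With `m = 0` the inner loop is skipped and the iteration reduces to ordinary successive approximations (value iteration); increasing `m` trades extra evaluation sweeps per iteration for fewer improvement steps, which is the computational trade-off the paper analyzes.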

Suggested Citation

  • Martin L. Puterman & Moon Chirl Shin, 1978. "Modified Policy Iteration Algorithms for Discounted Markov Decision Problems," Management Science, INFORMS, vol. 24(11), pages 1127-1137, July.
  • Handle: RePEc:inm:ormnsc:v:24:y:1978:i:11:p:1127-1137
    DOI: 10.1287/mnsc.24.11.1127

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.24.11.1127
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mnsc.24.11.1127?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Heer Burkhard & Maußner Alfred, 2011. "Value Function Iteration as a Solution Method for the Ramsey Model," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 231(4), pages 494-515, August.
    2. Mauro Gaggero & Giorgio Gnecco & Marcello Sanguineti, 2014. "Approximate dynamic programming for stochastic N-stage optimization with application to optimal consumption under uncertainty," Computational Optimization and Applications, Springer, vol. 58(1), pages 31-85, May.
    3. Mauro Gaggero & Giorgio Gnecco & Marcello Sanguineti, 2013. "Dynamic Programming and Value-Function Approximation in Sequential Decision Problems: Error Analysis and Numerical Results," Journal of Optimization Theory and Applications, Springer, vol. 156(2), pages 380-416, February.
    4. Mercedes Esteban-Bravo & Jose M. Vidal-Sanz & Gökhan Yildirim, 2014. "Valuing Customer Portfolios with Endogenous Mass and Direct Marketing Interventions Using a Stochastic Dynamic Programming Decomposition," Marketing Science, INFORMS, vol. 33(5), pages 621-640, September.
    5. David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
    6. Gabriel Zayas‐Cabán & Emmett J. Lodree & David L. Kaufman, 2020. "Optimal Control of Parallel Queues for Managing Volunteer Convergence," Production and Operations Management, Production and Operations Management Society, vol. 29(10), pages 2268-2288, October.
    7. Phelan, Thomas & Eslami, Keyvan, 2022. "Applications of Markov chain approximation methods to optimal control problems in economics," Journal of Economic Dynamics and Control, Elsevier, vol. 143(C).
    8. Keyvan Eslami & Tom Phelan, 2023. "The Art of Temporal Approximation: An Investigation into Numerical Solutions to Discrete and Continuous-Time Problems in Economics," Working Papers 23-10, Federal Reserve Bank of Cleveland.
    9. Herzberg, Meir & Yechiali, Uri, 1996. "A K-step look-ahead analysis of value iteration algorithms for Markov decision processes," European Journal of Operational Research, Elsevier, vol. 88(3), pages 622-636, February.
    10. Pelin Canbolat & Uriel Rothblum, 2013. "(Approximate) iterated successive approximations algorithm for sequential decision processes," Annals of Operations Research, Springer, vol. 208(1), pages 309-320, September.
    11. Gabriel Zayas-Cabán & Mark E. Lewis, 2020. "Admission control in a two-class loss system with periodically varying parameters and abandonments," Queueing Systems: Theory and Applications, Springer, vol. 94(1), pages 175-210, February.
    12. Oleksandr Shlakhter & Chi-Guhn Lee, 2013. "Accelerated modified policy iteration algorithms for Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 78(1), pages 61-76, August.
    13. Detlef Seese & Christof Weinhardt & Frank Schlottmann (ed.), 2008. "Handbook on Information Technology in Finance," International Handbooks on Information Systems, Springer, number 978-3-540-49487-4, November.
    14. Keyvan Eslami & Tom Phelan, 2021. "Applications of Markov Chain Approximation Methods to Optimal Control Problems in Economics," Working Papers 21-04R, Federal Reserve Bank of Cleveland, revised 17 May 2022.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormnsc:v:24:y:1978:i:11:p:1127-1137. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help by adding them using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.