
Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes

Author

Listed:
  • Oleksandr Shlakhter

    (Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario M5S 3G8, Canada)

  • Chi-Guhn Lee

    (Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario M5S 3G8, Canada)

  • Dmitry Khmelev

    (Deceased---Formerly with Department of Mathematics, University of Toronto, Toronto, Ontario, Canada)

  • Nasser Jaber

    (Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario M5S 3G8, Canada)

Abstract

We study a general approach to accelerating the convergence of the most widely used solution method for Markov decision processes (MDPs) with the total expected discounted reward. Inspired by the monotone behavior of the contraction mappings in the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can be easily implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches one and the transition probability matrix becomes dense, in which case the standard value iteration algorithm and its variants suffer from slow convergence.
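
For readers unfamiliar with the baseline being accelerated, the sketch below shows standard value iteration for a discounted MDP, with a generic hook where an operator could be applied to each iterate. The toy transition matrices, rewards, and the accelerate argument are illustrative assumptions made for this page; they are not the paper's specific acceleration operators or notation.

    # Minimal sketch of standard value iteration for a discounted MDP.
    # The MDP instance (P, R, gamma) is a hypothetical toy example, and the
    # "accelerate" hook is a generic placeholder -- NOT the paper's operators,
    # which exploit monotonicity of the contraction mapping within the feasible
    # region of the equivalent linear program.
    import numpy as np

    def value_iteration(P, R, gamma, tol=1e-8, max_iter=10_000, accelerate=None):
        """P: (A, S, S) transition matrices, R: (A, S) immediate rewards."""
        n_actions, n_states, _ = P.shape
        V = np.zeros(n_states)
        for _ in range(max_iter):
            # Bellman (contraction) operator: max over actions of one-step lookahead.
            Q = R + gamma * P @ V          # shape (A, S)
            V_new = Q.max(axis=0)
            if accelerate is not None:
                V_new = accelerate(V_new)  # optional post-processing of the iterate
            if np.max(np.abs(V_new - V)) < tol * (1 - gamma) / (2 * gamma):
                V = V_new
                break
            V = V_new
        policy = (R + gamma * P @ V).argmax(axis=0)
        return V, policy

    if __name__ == "__main__":
        # Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
        P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.5, 0.5], [0.6, 0.4]]])
        R = np.array([[1.0, 0.0],
                      [0.5, 2.0]])
        V, policy = value_iteration(P, R, gamma=0.95)
        print("V* approx:", V, "greedy policy:", policy)

As the abstract notes, such post-processing matters most when gamma is close to one and the transition matrices are dense, since the plain Bellman iteration then contracts slowly.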

Suggested Citation

  • Oleksandr Shlakhter & Chi-Guhn Lee & Dmitry Khmelev & Nasser Jaber, 2010. "Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes," Operations Research, INFORMS, vol. 58(1), pages 193-202, February.
  • Handle: RePEc:inm:oropre:v:58:y:2010:i:1:p:193-202
    DOI: 10.1287/opre.1090.0705

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.1090.0705
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.1090.0705?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    References listed on IDEAS

    1. Herzberg, Meir & Yechiali, Uri, 1996. "A K-step look-ahead analysis of value iteration algorithms for Markov decision processes," European Journal of Operational Research, Elsevier, vol. 88(3), pages 622-636, February.
    2. Meir Herzberg & Uri Yechiali, 1994. "Accelerating Procedures of the Value Iteration Algorithm for Discounted Markov Decision Processes, Based on a One-Step Lookahead Analysis," Operations Research, INFORMS, vol. 42(5), pages 940-946, October.
    3. D. P. de Farias & B. Van Roy, 2003. "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, INFORMS, vol. 51(6), pages 850-865, December.
    4. Trick, Michael A. & Zin, Stanley E., 1997. "Spline Approximations To Value Functions," Macroeconomic Dynamics, Cambridge University Press, vol. 1(1), pages 255-277, January.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Oleksandr Shlakhter & Chi-Guhn Lee, 2013. "Accelerated modified policy iteration algorithms for Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 78(1), pages 61-76, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Oleksandr Shlakhter & Chi-Guhn Lee, 2013. "Accelerated modified policy iteration algorithms for Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 78(1), pages 61-76, August.
    2. Cai, Yongyang & Judd, Kenneth L. & Lontzek, Thomas S. & Michelangeli, Valentina & Su, Che-Lin, 2017. "A Nonlinear Programming Method For Dynamic Programming," Macroeconomic Dynamics, Cambridge University Press, vol. 21(2), pages 336-361, March.
    3. Qihang Lin & Selvaprabu Nadarajah & Negar Soheili, 2020. "Revisiting Approximate Linear Programming: Constraint-Violation Learning with Applications to Inventory Control and Energy Storage," Management Science, INFORMS, vol. 66(4), pages 1544-1562, April.
    4. Jiaqiao Hu & Michael C. Fu & Vahid R. Ramezani & Steven I. Marcus, 2007. "An Evolutionary Random Policy Search Algorithm for Solving Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 19(2), pages 161-174, May.
    5. Alejandro Toriello & William B. Haskell & Michael Poremba, 2014. "A Dynamic Traveling Salesman Problem with Stochastic Arc Costs," Operations Research, INFORMS, vol. 62(5), pages 1107-1125, October.
    6. Laumer, Simon & Barz, Christiane, 2023. "Reductions of non-separable approximate linear programs for network revenue management," European Journal of Operational Research, Elsevier, vol. 309(1), pages 252-270.
    7. Daniela Pucci de Farias & Benjamin Van Roy, 2006. "A Cost-Shaping Linear Program for Average-Cost Approximate Dynamic Programming with Performance Guarantees," Mathematics of Operations Research, INFORMS, vol. 31(3), pages 597-620, August.
    8. Daniela Pucci de Farias & Benjamin Van Roy, 2004. "On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 29(3), pages 462-478, August.
    9. Selvaprabu Nadarajah & François Margot & Nicola Secomandi, 2015. "Relaxations of Approximate Linear Programs for the Real Option Management of Commodity Storage," Management Science, INFORMS, vol. 61(12), pages 3054-3076, December.
    10. Meissner, Joern & Strauss, Arne, 2012. "Network revenue management with inventory-sensitive bid prices and customer choice," European Journal of Operational Research, Elsevier, vol. 216(2), pages 459-468.
    11. Eike Nohdurft & Elisa Long & Stefan Spinler, 2017. "Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers," Decision Analysis, INFORMS, vol. 14(3), pages 139-169, September.
    12. Somayeh Moazeni & Thomas F. Coleman & Yuying Li, 2016. "Smoothing and parametric rules for stochastic mean-CVaR optimal execution strategy," Annals of Operations Research, Springer, vol. 237(1), pages 99-120, February.
    13. Jaime González & Juan-Carlos Ferrer & Alejandro Cataldo & Luis Rojas, 2019. "A proactive transfer policy for critical patient flow management," Health Care Management Science, Springer, vol. 22(2), pages 287-303, June.
    14. Thomas W. M. Vossen & Dan Zhang, 2015. "Reductions of Approximate Linear Programs for Network Revenue Management," Operations Research, INFORMS, vol. 63(6), pages 1352-1371, December.
    15. Mathias A. Klapp & Alan L. Erera & Alejandro Toriello, 2018. "The One-Dimensional Dynamic Dispatch Waves Problem," Transportation Science, INFORMS, vol. 52(2), pages 402-415, March.
    16. Novoa, Clara & Storer, Robert, 2009. "An approximate dynamic programming approach for the vehicle routing problem with stochastic demands," European Journal of Operational Research, Elsevier, vol. 196(2), pages 509-515, July.
    17. Höfferl, F. & Steinschorn, D., 2009. "A dynamic programming extension to the steady state refinery-LP," European Journal of Operational Research, Elsevier, vol. 197(2), pages 465-474, September.
    18. Diego Klabjan & Daniel Adelman, 2007. "An Infinite-Dimensional Linear Programming Algorithm for Deterministic Semi-Markov Decision Processes on Borel Spaces," Mathematics of Operations Research, INFORMS, vol. 32(3), pages 528-550, August.
    19. Matthew S. Maxwell & Mateo Restrepo & Shane G. Henderson & Huseyin Topaloglu, 2010. "Approximate Dynamic Programming for Ambulance Redeployment," INFORMS Journal on Computing, INFORMS, vol. 22(2), pages 266-281, May.
    20. Sauré, Antoine & Patrick, Jonathan & Tyldesley, Scott & Puterman, Martin L., 2012. "Dynamic multi-appointment patient scheduling for radiation therapy," European Journal of Operational Research, Elsevier, vol. 223(2), pages 573-584.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:58:y:2010:i:1:p:193-202. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.