
Randomized Linear Programming Solves the Markov Decision Problem in Nearly Linear (Sometimes Sublinear) Time

Author

Listed:
  • Mengdi Wang

    (Department of Operations Research and Financial Engineering, Princeton University, Princeton, New Jersey 08540)

Abstract

We propose a novel randomized linear programming algorithm for approximating the optimal policy of the discounted-reward and average-reward Markov decision problems. By leveraging the value–policy duality, the algorithm adaptively samples state–action–state transitions and makes exponentiated primal–dual updates. We show that it finds an ε-optimal policy using nearly linear runtime in the worst case for a fixed value of the discount factor. When the Markov decision process is ergodic and specified in some special data formats, for fixed values of certain ergodicity parameters, the algorithm finds an ε-optimal policy using sample size and time linear in the total number of state–action pairs, which is sublinear in the input size. These results provide a new venue and complexity benchmarks for solving stochastic dynamic programs.
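The abstract describes the general shape of such a method: write the discounted MDP as a bilinear saddle-point problem between a value vector (primal) and a distribution over state–action pairs (dual), then alternate sampled updates, with multiplicative-weight updates on the dual side. The sketch below is a simplified illustration of that idea, not the paper's exact algorithm or step-size schedule; it assumes a small tabular MDP with a transition sampler, and all names and constants (P, r, alpha, beta, T, and so on) are hypothetical choices for the example.

```python
# Minimal sketch of a randomized primal-dual method for the discounted-MDP
# linear program (illustrative only; not the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP: P[s, a, s'] transition probabilities, r[s, a] rewards.
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))
q = np.full(n_states, 1.0 / n_states)          # initial-state distribution

v = np.zeros(n_states)                          # primal: value estimate
mu = np.full((n_states, n_actions), 1.0 / (n_states * n_actions))  # dual on the simplex
mu_avg = np.zeros_like(mu)

alpha, beta, T = 0.05, 0.05, 20000              # step sizes / iterations (illustrative)
v_max = r.max() / (1.0 - gamma)                 # box used to project the value iterate

for t in range(T):
    # Sample a state-action pair from the dual iterate, then a next state.
    idx = rng.choice(n_states * n_actions, p=mu.ravel())
    s, a = divmod(idx, n_actions)
    s_next = rng.choice(n_states, p=P[s, a])

    # Sampled Bellman residual r(s,a) + gamma*v(s') - v(s).
    delta = r[s, a] + gamma * v[s_next] - v[s]

    # Exponentiated (multiplicative-weight) dual update, renormalized to the simplex.
    mu[s, a] *= np.exp(beta * delta)
    mu /= mu.sum()

    # Stochastic primal (value) update with projection onto [0, v_max].
    grad_v = (1.0 - gamma) * q
    grad_v[s_next] += gamma
    grad_v[s] -= 1.0
    v = np.clip(v - alpha * grad_v, 0.0, v_max)

    mu_avg += mu

# Read off an approximately optimal policy from the averaged dual variable.
policy = mu_avg.argmax(axis=1)
print("greedy policy from averaged dual:", policy)
```

In the paper, step sizes and the number of sampled transitions are tuned as functions of ε, the discount factor, and (in the sublinear regime) ergodicity parameters to obtain the stated runtime guarantees; the constants above are fixed purely for demonstration.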

Suggested Citation

  • Mengdi Wang, 2020. "Randomized Linear Programming Solves the Markov Decision Problem in Nearly Linear (Sometimes Sublinear) Time," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 517-546, May.
  • Handle: RePEc:inm:ormoor:v:45:y:2020:i:2:p:517-546
    DOI: 10.1287/moor.2019.1000

    Download full text from publisher

    File URL: https://doi.org/10.1287/moor.2019.1000
    Download Restriction: no

    File URL: https://libkey.io/10.1287/moor.2019.1000?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item

    References listed on IDEAS

    1. Yinyu Ye, 2011. "The Simplex and Policy-Iteration Methods Are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate," Mathematics of Operations Research, INFORMS, vol. 36(4), pages 593-603, November.
    2. D. P. de Farias & B. Van Roy, 2003. "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, INFORMS, vol. 51(6), pages 850-865, December.
    3. Yinyu Ye, 2005. "A New Complexity Result on Solving the Markov Decision Problem," Mathematics of Operations Research, INFORMS, vol. 30(3), pages 733-749, August.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Archis Ghate & Robert L. Smith, 2013. "A Linear Programming Approach to Nonstationary Infinite-Horizon Markov Decision Processes," Operations Research, INFORMS, vol. 61(2), pages 413-425, April.
    2. Guy Even & Alexander Zadorojniy, 2012. "Strong polynomiality of the Gass-Saaty shadow-vertex pivoting rule for controlled random walks," Annals of Operations Research, Springer, vol. 201(1), pages 159-167, December.
    3. Ian Post & Yinyu Ye, 2015. "The Simplex Method is Strongly Polynomial for Deterministic Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 40(4), pages 859-868, October.
    4. Guillot, Matthieu & Stauffer, Gautier, 2020. "The Stochastic Shortest Path Problem: A polyhedral combinatorics perspective," European Journal of Operational Research, Elsevier, vol. 285(1), pages 148-158.
    5. Meissner, Joern & Strauss, Arne, 2012. "Network revenue management with inventory-sensitive bid prices and customer choice," European Journal of Operational Research, Elsevier, vol. 216(2), pages 459-468.
    6. Eike Nohdurft & Elisa Long & Stefan Spinler, 2017. "Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers," Decision Analysis, INFORMS, vol. 14(3), pages 139-169, September.
    7. Somayeh Moazeni & Thomas F. Coleman & Yuying Li, 2016. "Smoothing and parametric rules for stochastic mean-CVaR optimal execution strategy," Annals of Operations Research, Springer, vol. 237(1), pages 99-120, February.
    8. Jaime González & Juan-Carlos Ferrer & Alejandro Cataldo & Luis Rojas, 2019. "A proactive transfer policy for critical patient flow management," Health Care Management Science, Springer, vol. 22(2), pages 287-303, June.
    9. Thomas W. M. Vossen & Dan Zhang, 2015. "Reductions of Approximate Linear Programs for Network Revenue Management," Operations Research, INFORMS, vol. 63(6), pages 1352-1371, December.
    10. Mathias A. Klapp & Alan L. Erera & Alejandro Toriello, 2018. "The One-Dimensional Dynamic Dispatch Waves Problem," Transportation Science, INFORMS, vol. 52(2), pages 402-415, March.
    11. Novoa, Clara & Storer, Robert, 2009. "An approximate dynamic programming approach for the vehicle routing problem with stochastic demands," European Journal of Operational Research, Elsevier, vol. 196(2), pages 509-515, July.
    12. Höfferl, F. & Steinschorn, D., 2009. "A dynamic programming extension to the steady state refinery-LP," European Journal of Operational Research, Elsevier, vol. 197(2), pages 465-474, September.
    13. Alexander Zadorojniy & Guy Even & Adam Shwartz, 2009. "A Strongly Polynomial Algorithm for Controlled Queues," Mathematics of Operations Research, INFORMS, vol. 34(4), pages 992-1007, November.
    14. Diego Klabjan & Daniel Adelman, 2007. "An Infinite-Dimensional Linear Programming Algorithm for Deterministic Semi-Markov Decision Processes on Borel Spaces," Mathematics of Operations Research, INFORMS, vol. 32(3), pages 528-550, August.
    15. Matthew S. Maxwell & Mateo Restrepo & Shane G. Henderson & Huseyin Topaloglu, 2010. "Approximate Dynamic Programming for Ambulance Redeployment," INFORMS Journal on Computing, INFORMS, vol. 22(2), pages 266-281, May.
    16. Oleksandr Shlakhter & Chi-Guhn Lee, 2013. "Accelerated modified policy iteration algorithms for Markov decision processes," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 78(1), pages 61-76, August.
    17. Sauré, Antoine & Patrick, Jonathan & Tyldesley, Scott & Puterman, Martin L., 2012. "Dynamic multi-appointment patient scheduling for radiation therapy," European Journal of Operational Research, Elsevier, vol. 223(2), pages 573-584.
    18. Jalaj Bhandari & Daniel Russo & Raghav Singal, 2021. "A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation," Operations Research, INFORMS, vol. 69(3), pages 950-973, May.
    19. Cai, Yongyang & Judd, Kenneth L. & Lontzek, Thomas S. & Michelangeli, Valentina & Su, Che-Lin, 2017. "A Nonlinear Programming Method For Dynamic Programming," Macroeconomic Dynamics, Cambridge University Press, vol. 21(2), pages 336-361, March.
    20. Nikolaos E. Pratikakis & Matthew J. Realff & Jay H. Lee, 2010. "Strategic capacity decision‐making in a stochastic manufacturing environment using real‐time approximate dynamic programming," Naval Research Logistics (NRL), John Wiley & Sons, vol. 57(3), pages 211-224, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormoor:v:45:y:2020:i:2:p:517-546. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.