
Robust Control of Markov Decision Processes with Uncertain Transition Matrices

Author

Listed:
  • Arnab Nilim

    (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720)

  • Laurent El Ghaoui

    (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720)

Abstract

Optimal solutions to Markov decision problems may be very sensitive to the state transition probabilities. In many practical problems, the estimation of these probabilities is far from accurate; hence, estimation errors are limiting factors in applying Markov decision processes to real-world problems. We consider a robust control problem for a finite-state, finite-action Markov decision process, where uncertainty on the transition matrices is described in terms of possibly nonconvex sets. We show that perfect duality holds for this problem and that, as a consequence, it can be solved with a variant of the classical dynamic programming algorithm, the “robust dynamic programming” algorithm. We show that a particular choice of the uncertainty sets, involving likelihood regions or entropy bounds, leads both to a statistically accurate representation of uncertainty and to a complexity of the robust recursion that is almost the same as that of the classical recursion. Hence, robustness can be added at practically no extra computing cost. We derive similar results for other uncertainty sets, including one with a finite number of possible values for the transition matrices. We illustrate in a practical path-planning example the benefits of using a robust strategy instead of the classical optimal strategy; even if the uncertainty level is only crudely guessed, the robust strategy yields a much better worst-case expected travel time.
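The robust recursion the abstract describes alternates a max over actions with a min over the uncertainty set. The sketch below illustrates the idea for the simplest uncertainty model mentioned in the abstract, a finite set of candidate transition matrices per action, with a per-state (rectangular) worst case; all names, the reward layout `R[a, s]`, and the discounted infinite-horizon setting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def robust_value_iteration(P_scenarios, R, gamma=0.95, tol=1e-8, max_iter=1000):
    """Robust value iteration for a finite-state, finite-action MDP whose
    transition matrix for each action is only known to lie in a finite
    scenario set.

    P_scenarios: list over actions; each entry is a list of candidate
                 (S x S) transition matrices for that action.
    R:           (A x S) array of immediate rewards R[a, s].
    Returns the worst-case optimal value function V (length S).
    """
    A = len(P_scenarios)
    S = R.shape[1]
    V = np.zeros(S)
    for _ in range(max_iter):
        Q = np.empty((A, S))
        for a in range(A):
            # Nature picks the worst candidate independently per state
            # (elementwise min over the scenario expectations); the
            # controller then maximizes over actions.
            worst = np.min([P @ V for P in P_scenarios[a]], axis=0)
            Q[a] = R[a] + gamma * worst
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new
    return V
```

For example, with one action in a two-state chain where one candidate matrix sends state 0 to an absorbing zero-reward state and the other keeps it in place, the robust value of state 0 is driven down to the worst of the two scenarios, while classical value iteration under the optimistic matrix would report a higher value.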

Suggested Citation

  • Arnab Nilim & Laurent El Ghaoui, 2005. "Robust Control of Markov Decision Processes with Uncertain Transition Matrices," Operations Research, INFORMS, vol. 53(5), pages 780-798, October.
  • Handle: RePEc:inm:oropre:v:53:y:2005:i:5:p:780-798
    DOI: 10.1287/opre.1050.0216

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.1050.0216
    Download Restriction: no


    References listed on IDEAS

    1. Chelsea C. White & Hany K. Eldeib, 1994. "Markov Decision Processes with Imprecise Transition Probabilities," Operations Research, INFORMS, vol. 42(4), pages 739-749, August.
    2. Nowak, Andrzej S. & Szajowski, Krzysztof, 1998. "Nonzero-sum Stochastic Games," MPRA Paper 19995, University Library of Munich, Germany, revised 1999.
    3. Jay K. Satia & Roy E. Lave, 1973. "Markovian Decision Processes with Uncertain Transition Probabilities," Operations Research, INFORMS, vol. 21(3), pages 728-740, June.
    4. J. K. Satia & R. E. Lave, 1973. "Markovian Decision Processes with Probabilistic Observation of States," Management Science, INFORMS, vol. 20(1), pages 1-13, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zeynep Turgay & Fikri Karaesmen & Egemen Lerzan Örmeci, 2018. "Structural properties of a class of robust inventory and queueing control problems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 65(8), pages 699-716, December.
    2. Andrew J. Keith & Darryl K. Ahner, 2021. "A survey of decision making and optimization under uncertainty," Annals of Operations Research, Springer, vol. 300(2), pages 319-353, May.
    3. Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
    4. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    5. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    6. Hyeong Chang, 2006. "Perfect information two-person zero-sum markov games with imprecise transition probabilities," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 64(2), pages 335-351, October.
    7. Peter Buchholz & Dimitri Scheftelowitsch, 2019. "Computation of weighted sums of rewards for concurrent MDPs," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 89(1), pages 1-42, February.
    8. Schapaugh, Adam W. & Tyre, Andrew J., 2013. "Accounting for parametric uncertainty in Markov decision processes," Ecological Modelling, Elsevier, vol. 254(C), pages 15-21.
    9. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2010. "Robust Markov Decision Processes," Working Papers 034, COMISEF.
    10. Blanc, J.P.C. & den Hertog, D., 2008. "On Markov Chains with Uncertain Data," Other publications TiSEM b44dfb0a-1676-4ce3-8d16-f, Tilburg University, School of Economics and Management.
    11. David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
    12. Erick Delage & Shie Mannor, 2010. "Percentile Optimization for Markov Decision Processes with Parameter Uncertainty," Operations Research, INFORMS, vol. 58(1), pages 203-213, February.
    13. D. Škulj & R. Hable, 2013. "Coefficients of ergodicity for Markov chains with uncertain parameters," Metrika: International Journal for Theoretical and Applied Statistics, Springer, vol. 76(1), pages 107-133, January.
    14. Zhu, Zhicheng & Xiang, Yisha & Zhao, Ming & Shi, Yue, 2023. "Data-driven remanufacturing planning with parameter uncertainty," European Journal of Operational Research, Elsevier, vol. 309(1), pages 102-116.
    15. Zeynep Turgay & Fikri Karaesmen & E. Örmeci, 2015. "A dynamic inventory rationing problem with uncertain demand and production rates," Annals of Operations Research, Springer, vol. 231(1), pages 207-228, August.
    16. Salah eddine Semati & Abdelkader Gasmi, 2023. "Markov interval chain (MIC) for solving a decision problem," OPSEARCH, Springer;Operational Research Society of India, vol. 60(2), pages 802-811, June.
    17. Abhijit Gosavi, 2009. "Reinforcement Learning: A Tutorial Survey and Recent Advances," INFORMS Journal on Computing, INFORMS, vol. 21(2), pages 178-192, May.
    18. Xiaoting Ji & Yifeng Niu & Lincheng Shen, 2016. "Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty," PLOS ONE, Public Library of Science, vol. 11(11), pages 1-35, November.
    19. Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.
    20. Erim Kardeş & Fernando Ordóñez & Randolph W. Hall, 2011. "Discounted Robust Stochastic Games and an Application to Queueing Control," Operations Research, INFORMS, vol. 59(2), pages 365-382, April.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.