
Convergence of controlled models and finite-state approximation for discounted continuous-time Markov decision processes with constraints

Author

Listed:
  • Guo, Xianping
  • Zhang, Wenzhao

Abstract

In this paper we consider the convergence of a sequence {Mn} of models of discounted continuous-time constrained Markov decision processes (MDPs) to a “limit” model, denoted by M∞. For models with denumerable states and unbounded transition rates, under reasonably mild conditions we prove that the constrained optimal policies and the optimal values of {Mn} converge to those of M∞, using a technique based on occupation measures. As an application of this convergence result, we show that an optimal policy and the optimal value of a countable-state continuous-time MDP can be approximated by those of finite-state continuous-time MDPs. Finally, we illustrate this finite-state approximation by numerically solving a controlled birth-and-death system and give the corresponding error bound of the approximation.
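To make the finite-state approximation scheme concrete, here is a minimal sketch, not taken from the paper: it truncates a controlled birth-and-death system at level N and solves each truncated constrained discounted continuous-time MDP as a linear program over occupation measures (the same object used in the convergence analysis). The birth rate, controlled service rates, cost functions, constraint bound, and discount rate below are illustrative placeholders rather than the paper's numerical example.

```python
# Minimal sketch (not the paper's example): finite-state truncation of a
# constrained, discounted continuous-time MDP for a controlled birth-and-death
# system, solved as a linear program over occupation measures.
# All model data below are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

def solve_truncated_ctmdp(N, alpha=1.0, kappa=2.0):
    """Solve the constrained discounted CTMDP on the truncated state space {0,...,N}."""
    actions = [0.0, 1.0, 2.0]                 # hypothetical service-rate controls
    S, A = N + 1, len(actions)
    birth = 1.0                               # placeholder arrival/birth rate

    # Conservative transition-rate matrix q(j | i, a).
    q = np.zeros((S, A, S))
    for i in range(S):
        for k, a in enumerate(actions):
            if i < N:
                q[i, k, i + 1] += birth       # birth, cut off at the truncation level
            if i > 0:
                q[i, k, i - 1] += a           # controlled death/service
            q[i, k, i] = -q[i, k].sum()       # rows sum to zero

    c = np.array([[i + 0.5 * a for a in actions] for i in range(S)])  # holding + control cost
    d = np.array([[a for a in actions] for i in range(S)])            # constrained cost (effort)
    gamma = np.zeros(S)
    gamma[0] = 1.0                            # initial distribution: start empty

    # Occupation measure eta(i, a) >= 0, flattened to length S*A, with balance equations
    #   alpha * sum_a eta(j, a) - sum_{i, a} eta(i, a) * q(j | i, a) = gamma(j)  for all j.
    A_eq = np.zeros((S, S * A))
    for j in range(S):
        for i in range(S):
            for k in range(A):
                A_eq[j, i * A + k] = alpha * (i == j) - q[i, k, j]

    res = linprog(c.ravel(),
                  A_ub=d.ravel().reshape(1, -1), b_ub=[kappa],   # discounted effort <= kappa
                  A_eq=A_eq, b_eq=gamma,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    eta = res.x.reshape(S, A)
    policy = eta / np.maximum(eta.sum(axis=1, keepdims=True), 1e-12)  # randomized stationary policy
    return res.fun, policy

# The optimal values of the truncated models should stabilize as N grows,
# mirroring the finite-state approximation of the countable-state model.
for N in (5, 10, 20, 40):
    value, _ = solve_truncated_ctmdp(N)
    print(N, round(value, 4))
```

If the truncated models satisfy the paper's conditions, the values reported for increasing N should approach the optimal value of the original countable-state model, and an (approximately) optimal randomized stationary policy is read off by normalizing the optimal occupation measure over actions at each state; the paper additionally provides an error bound for this approximation.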

Suggested Citation

  • Guo, Xianping & Zhang, Wenzhao, 2014. "Convergence of controlled models and finite-state approximation for discounted continuous-time Markov decision processes with constraints," European Journal of Operational Research, Elsevier, vol. 238(2), pages 486-496.
  • Handle: RePEc:eee:ejores:v:238:y:2014:i:2:p:486-496
    DOI: 10.1016/j.ejor.2014.03.037

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221714002768
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2014.03.037?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Xianping Guo & Alexei Piunovskiy, 2011. "Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 105-132, February.
    2. Cervellera, C. & Macciò, D., 2011. "A comparison of global and semi-local approximation in T-stage stochastic optimization," European Journal of Operational Research, Elsevier, vol. 208(2), pages 109-118, January.
    3. Jorge Alvarez-Mena & Onésimo Hernández-Lerma, 2002. "Convergence of the optimal values of constrained Markov control processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 55(3), pages 461-484, June.
    4. Jorge Alvarez-Mena & Onésimo Hernández-Lerma, 2006. "Existence of Nash equilibria for constrained stochastic games," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 63(2), pages 261-285, May.
    5. Eugene A. Feinberg, 2000. "Constrained Discounted Markov Decision Processes and Hamiltonian Cycles," Mathematics of Operations Research, INFORMS, vol. 25(1), pages 130-140, February.
    6. Eugene A. Feinberg, 2004. "Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach," Mathematics of Operations Research, INFORMS, vol. 29(3), pages 492-524, August.
    7. Alexey Piunovskiy & Yi Zhang, 2012. "The Transformation Method for Continuous-Time Markov Decision Processes," Journal of Optimization Theory and Applications, Springer, vol. 154(2), pages 691-712, August.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Tomás Prieto-Rumeau & José Lorenzo, 2015. "Approximation of zero-sum continuous-time Markov games under the discounted payoff criterion," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 23(3), pages 799-836, October.
    2. Qingda Wei, 2016. "Continuous-time Markov decision processes with risk-sensitive finite-horizon cost criterion," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 84(3), pages 461-487, December.
    3. Qingda Wei, 2017. "Finite approximation for finite-horizon continuous-time Markov decision processes," 4OR, Springer, vol. 15(1), pages 67-84, March.
    4. Ping Cao & Jingui Xie, 2016. "Optimal control of a multiclass queueing system when customers can change types," Queueing Systems: Theory and Applications, Springer, vol. 82(3), pages 285-313, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wenzhao Zhang, 2019. "Discrete-Time Constrained Average Stochastic Games with Independent State Processes," Mathematics, MDPI, vol. 7(11), pages 1-18, November.
    2. Lanlan Zhang & Xianping Guo, 2008. "Constrained continuous-time Markov decision processes with average criteria," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 67(2), pages 323-340, April.
    3. Yonghui Huang & Qingda Wei & Xianping Guo, 2013. "Constrained Markov decision processes with first passage criteria," Annals of Operations Research, Springer, vol. 206(1), pages 197-219, July.
    4. Ping Cao & Jingui Xie, 2016. "Optimal control of a multiclass queueing system when customers can change types," Queueing Systems: Theory and Applications, Springer, vol. 82(3), pages 285-313, April.
    5. Alexey Piunovskiy & Yi Zhang, 2012. "The Transformation Method for Continuous-Time Markov Decision Processes," Journal of Optimization Theory and Applications, Springer, vol. 154(2), pages 691-712, August.
    6. Xianping Guo & Yi Zhang, 2016. "Optimality of Mixed Policies for Average Continuous-Time Markov Decision Processes with Constraints," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1276-1296, November.
    7. Zéphyr, Luckny & Lang, Pascal & Lamond, Bernard F. & Côté, Pascal, 2017. "Approximate stochastic dynamic programming for hydroelectric production planning," European Journal of Operational Research, Elsevier, vol. 262(2), pages 586-601.
    8. Yonghui Huang & Xianping Guo, 2020. "Multiconstrained Finite-Horizon Piecewise Deterministic Markov Decision Processes with Unbounded Transition Rates," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 641-659, May.
    9. Xianping Guo, 2007. "Continuous-Time Markov Decision Processes with Discounted Rewards: The Case of Polish Spaces," Mathematics of Operations Research, INFORMS, vol. 32(1), pages 73-87, February.
    10. Eugene A. Feinberg & Uriel G. Rothblum, 2012. "Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(1), pages 129-153, February.
    11. Hermans, Ben & Leus, Roel & Looy, Bart Van, 2023. "Deciding on scheduling, secrecy, and patenting during the new product development process: The relevance of project planning models," Omega, Elsevier, vol. 116(C).
    12. Xianping Guo & Alexei Piunovskiy, 2011. "Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 105-132, February.
    13. Yi Zhang, 2013. "Convex analytic approach to constrained discounted Markov decision processes with non-constant discount factors," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 21(2), pages 378-408, July.
    14. Vladimir Ejov & Jerzy A. Filar & Michael Haythorpe & Giang T. Nguyen, 2009. "Refined MDP-Based Branch-and-Fix Algorithm for the Hamiltonian Cycle Problem," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 758-768, August.
    15. Ali Eshragh & Jerzy Filar & Michael Haythorpe, 2011. "A hybrid simulation-optimization algorithm for the Hamiltonian cycle problem," Annals of Operations Research, Springer, vol. 189(1), pages 103-125, September.
    16. Vivek Borkar & Jerzy Filar, 2013. "Markov chains, Hamiltonian cycles and volumes of convex bodies," Journal of Global Optimization, Springer, vol. 55(3), pages 633-639, March.
    17. Ali Eshragh & Jerzy Filar, 2011. "Hamiltonian Cycles, Random Walks, and Discounted Occupational Measures," Mathematics of Operations Research, INFORMS, vol. 36(2), pages 258-270, May.
    18. Huang, Yonghui & Guo, Xianping, 2011. "Finite horizon semi-Markov decision processes with application to maintenance systems," European Journal of Operational Research, Elsevier, vol. 212(1), pages 131-140, July.
    19. Jun Fei & Eugene Feinberg, 2013. "Variance minimization for constrained discounted continuous-time MDPs with exponentially distributed stopping times," Annals of Operations Research, Springer, vol. 208(1), pages 433-450, September.
    20. Ali Eshragh & Jerzy A. Filar & Thomas Kalinowski & Sogol Mohammadian, 2020. "Hamiltonian Cycles and Subsets of Discounted Occupational Measures," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 713-731, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:238:y:2014:i:2:p:486-496. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.