Printed from https://ideas.repec.org/a/spr/mathme/v71y2010i3p401-425.html

Sensitivity analysis and optimal ultimately stationary deterministic policies in some constrained discounted cost models

Author

Listed:
  • Krishnamurthy Iyer
  • Nandyala Hemachandra

Abstract

We consider a discrete-time Markov Decision Process (MDP) under the discounted payoff criterion in the presence of additional discounted cost constraints. We study the sensitivity of optimal Stationary Randomized (SR) policies in this setting with respect to the upper bound on the discounted cost constraint functionals. We show that such sensitivity analysis leads to an improved version of the Feinberg–Shwartz algorithm (Math Oper Res 21(4):922–945, 1996) for finding optimal policies that are ultimately stationary and deterministic. Copyright Springer-Verlag 2010
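The setting the abstract describes can be made concrete with a small example. The sketch below is not the authors' algorithm; it uses the standard occupation-measure linear program for a constrained discounted MDP (the formulation underlying the cited Feinberg–Shwartz work), with made-up transition and cost data. It computes an optimal stationary randomized policy and illustrates the role of the upper bound `C` on the constrained discounted cost, the quantity whose sensitivity the paper studies.

```python
# Toy constrained discounted MDP solved via the occupation-measure LP.
# All data (P, k0, k1, alpha, C) are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

nS, nA, beta = 2, 2, 0.9
# P[s, a, s']: transition probability from state s under action a to s'
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
k0 = np.array([[1.0, 3.0], [2.0, 0.5]])   # running cost to minimize
k1 = np.array([[2.0, 0.5], [1.0, 2.0]])   # running cost in the constraint
alpha = np.array([0.5, 0.5])              # initial state distribution
C = 10.0                                  # upper bound on discounted k1-cost

# Decision variable: occupation measure rho(s, a), flattened to length nS*nA.
# Flow constraints:
#   sum_a rho(s', a) - beta * sum_{s, a} P(s'|s, a) * rho(s, a) = alpha(s')
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (s == sp) - beta * P[s, a, sp]
b_eq = alpha

# Constraint functional: discounted k1-cost must not exceed C
A_ub = k1.reshape(1, -1)
b_ub = np.array([C])

res = linprog(k0.ravel(), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
rho = res.x.reshape(nS, nA)
# Recover a stationary randomized policy from the occupation measure
policy = rho / rho.sum(axis=1, keepdims=True)
```

With these numbers the unconstrained k0-optimal policy violates the k1-bound, so the constraint binds and the LP solution randomizes; tightening or relaxing `C` and re-solving shows how the optimal policy and value move with the bound.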

Suggested Citation

  • Krishnamurthy Iyer & Nandyala Hemachandra, 2010. "Sensitivity analysis and optimal ultimately stationary deterministic policies in some constrained discounted cost models," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 71(3), pages 401-425, June.
  • Handle: RePEc:spr:mathme:v:71:y:2010:i:3:p:401-425
    DOI: 10.1007/s00186-010-0303-8
    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s00186-010-0303-8
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s00186-010-0303-8?utm_source=ideas

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Eugene A. Feinberg & Adam Shwartz, 1996. "Constrained Discounted Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 21(4), pages 922-945, November.
    2. Eugene A. Feinberg & Adam Shwartz, 1995. "Constrained Markov Decision Models with Weighted Discounted Rewards," Mathematics of Operations Research, INFORMS, vol. 20(2), pages 302-320, May.
    3. Eugene A. Feinberg & Adam Shwartz, 1994. "Markov Decision Models with Weighted Discounted Criteria," Mathematics of Operations Research, INFORMS, vol. 19(1), pages 152-168, February.
    4. Stratton C. Jaquette, 1976. "A Utility Criterion for Markov Decision Processes," Management Science, INFORMS, vol. 23(1), pages 43-49, September.
    5. Cyrus Derman & Morton Klein, 1965. "Some Remarks on Finite Horizon Markovian Decision Models," Operations Research, INFORMS, vol. 13(2), pages 272-278, April.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Kumar, Uday M & Bhat, Sanjay P. & Kavitha, Veeraruna & Hemachandra, Nandyala, 2023. "Approximate solutions to constrained risk-sensitive Markov decision processes," European Journal of Operational Research, Elsevier, vol. 310(1), pages 249-267.
    2. Nandyala Hemachandra & Kamma Sri Naga Rajesh & Mohd. Abdul Qavi, 2016. "A model for equilibrium in some service-provider user-set interactions," Annals of Operations Research, Springer, vol. 243(1), pages 95-115, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kumar, Uday M & Bhat, Sanjay P. & Kavitha, Veeraruna & Hemachandra, Nandyala, 2023. "Approximate solutions to constrained risk-sensitive Markov decision processes," European Journal of Operational Research, Elsevier, vol. 310(1), pages 249-267.
    2. Flesch, J. & Thuijsman, F. & Vrieze, O. J., 1999. "Average-discounted equilibria in stochastic games," European Journal of Operational Research, Elsevier, vol. 112(1), pages 187-195, January.
    3. Ohlmann, Jeffrey W. & Bean, James C., 2009. "Resource-constrained management of heterogeneous assets with stochastic deterioration," European Journal of Operational Research, Elsevier, vol. 199(1), pages 198-208, November.
    4. Eugene A. Feinberg, 2004. "Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach," Mathematics of Operations Research, INFORMS, vol. 29(3), pages 492-524, August.
    5. Juan González-Hernández & Raquiel López-Martínez & J. Pérez-Hernández, 2007. "Markov control processes with randomized discounted cost," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 65(1), pages 27-44, February.
    6. J. Minjárez-Sosa, 2015. "Markov control models with unknown random state–action-dependent discount factors," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 23(3), pages 743-772, October.
    7. Pestien, Victor & Wang, Xiaobo, 1998. "Markov-achievable payoffs for finite-horizon decision models," Stochastic Processes and their Applications, Elsevier, vol. 73(1), pages 101-118, January.
    8. Sen Lin & Bo Li & Antonio Arreola-Risa & Yiwei Huang, 2023. "Optimizing a single-product production-inventory system under constant absolute risk aversion," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 31(3), pages 510-537, October.
    9. Lucy Gongtao Chen & Daniel Zhuoyu Long & Melvyn Sim, 2015. "On Dynamic Decision Making to Meet Consumption Targets," Operations Research, INFORMS, vol. 63(5), pages 1117-1130, October.
    10. Zeynep Erkin & Matthew D. Bailey & Lisa M. Maillart & Andrew J. Schaefer & Mark S. Roberts, 2010. "Eliciting Patients' Revealed Preferences: An Inverse Markov Decision Process Approach," Decision Analysis, INFORMS, vol. 7(4), pages 358-365, December.
    11. Nicole Bäuerle & Ulrich Rieder, 2014. "More Risk-Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 39(1), pages 105-120, February.
    12. Eugene A. Feinberg & Uriel G. Rothblum, 2012. "Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(1), pages 129-153, February.
    13. Monahan, George E. & Sobel, Matthew J., 1997. "Risk-Sensitive Dynamic Market Share Attraction Games," Games and Economic Behavior, Elsevier, vol. 20(2), pages 149-160, August.
    14. Nandyala Hemachandra & Kamma Sri Naga Rajesh & Mohd. Abdul Qavi, 2016. "A model for equilibrium in some service-provider user-set interactions," Annals of Operations Research, Springer, vol. 243(1), pages 95-115, August.
    15. HuiChen Chiang, 2007. "Financial intermediary's choice of borrowing," Applied Economics, Taylor & Francis Journals, vol. 40(2), pages 251-260.
    16. Vladimir Ejov & Jerzy A. Filar & Michael Haythorpe & Giang T. Nguyen, 2009. "Refined MDP-Based Branch-and-Fix Algorithm for the Hamiltonian Cycle Problem," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 758-768, August.
    17. Nielsen, Lars Relund & Kristensen, Anders Ringgaard, 2006. "Finding the K best policies in a finite-horizon Markov decision process," European Journal of Operational Research, Elsevier, vol. 175(2), pages 1164-1179, December.
    18. Mehmet U. S. Ayvaci & Oguzhan Alagoz & Elizabeth S. Burnside, 2012. "The Effect of Budgetary Restrictions on Breast Cancer Diagnostic Decisions," Manufacturing & Service Operations Management, INFORMS, vol. 14(4), pages 600-617, October.
    19. Łukasz Balbus & Kevin Reffett & Łukasz Woźny, 2015. "Time consistent Markov policies in dynamic economies with quasi-hyperbolic consumers," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(1), pages 83-112, February.
    20. Rolando Cavazos-Cadena, 2009. "Solutions of the average cost optimality equation for finite Markov decision chains: risk-sensitive and risk-neutral criteria," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 70(3), pages 541-566, December.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.