
Truncation of Markov decision problems with a queueing network overflow control application

Author

  • Dijk, N.M. van

    (Vrije Universiteit Amsterdam, Faculteit der Economische Wetenschappen en Econometrie / Free University Amsterdam, Faculty of Economic Sciences, Business Administration and Econometrics)

Abstract

No abstract is available for this item.

Suggested Citation

  • Dijk, N.M. van, 1989. "Truncation of Markov decision problems with a queueing network overflow control application," Serie Research Memoranda 0065, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
  • Handle: RePEc:vua:wpaper:1989-65

    Download full text from publisher

    File URL: http://degree.ubvu.vu.nl/repec/vua/wpaper/pdf/19890065.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Amedeo R. Odoni, 1969. "On Finding the Maximal Gain for Markov Decision Processes," Operations Research, INFORMS, vol. 17(5), pages 857-860, October.
    2. A. Hordijk & L. C. M. Kallenberg, 1979. "Linear Programming and Markov Decision Chains," Management Science, INFORMS, vol. 25(4), pages 352-362, April.
    3. Dijk, N.M. van, 1988. "Approximate uniformization for continuous-time Markov chains with an application to performability analysis," Serie Research Memoranda 0054, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
    4. A. Hordijk & L. C. M. Kallenberg, 1984. "Constrained Undiscounted Stochastic Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 9(2), pages 276-289, May.
    5. Martin L. Puterman & Moon Chirl Shin, 1982. "Action Elimination Procedures for Modified Policy Iteration Algorithms," Operations Research, INFORMS, vol. 30(2), pages 301-318, April.
    6. Ward Whitt, 1978. "Approximations of Dynamic Programs, I," Mathematics of Operations Research, INFORMS, vol. 3(3), pages 231-243, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lodewijk Kallenberg, 2013. "Derman’s book as inspiration: some results on LP for MDPs," Annals of Operations Research, Springer, vol. 208(1), pages 63-94, September.
    2. Dmitry Krass & O. J. Vrieze, 2002. "Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(3), pages 545-566, August.
    3. Dijk, N.M. van, 1989. "The importance of bias-terms for error bounds and comparison results," Serie Research Memoranda 0036, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
    4. Pelin Canbolat & Uriel Rothblum, 2013. "(Approximate) iterated successive approximations algorithm for sequential decision processes," Annals of Operations Research, Springer, vol. 208(1), pages 309-320, September.
    5. Vivek S. Borkar & Vladimir Gaitsgory, 2019. "Linear Programming Formulation of Long-Run Average Optimal Control Problem," Journal of Optimization Theory and Applications, Springer, vol. 181(1), pages 101-125, April.
    6. Jérôme Renault & Xavier Venel, 2017. "Long-Term Values in Markov Decision Processes and Repeated Games, and a New Distance for Probability Spaces," Mathematics of Operations Research, INFORMS, vol. 42(2), pages 349-376, May.
    7. N. M. Van Dijk & K. Sladký, 1999. "Error Bounds for Nonnegative Dynamic Models," Journal of Optimization Theory and Applications, Springer, vol. 101(2), pages 449-474, May.
    8. Silvia Florio & Wolfgang Runggaldier, 1999. "On hedging in finite security markets," Applied Mathematical Finance, Taylor & Francis Journals, vol. 6(3), pages 159-176.
    9. Mabel M. TIDBALL & Eitan ALTMAN, 1994. "Approximations In Dynamic Zero-Sum Games," Game Theory and Information 9401001, University Library of Munich, Germany.
    10. Tetsuichiro Iki & Masayuki Horiguchi & Masami Kurano, 2007. "A structured pattern matrix algorithm for multichain Markov decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 66(3), pages 545-555, December.
    11. Dellaert, N. P. & Melo, M. T., 1996. "Production strategies for a stochastic lot-sizing problem with constant capacity," European Journal of Operational Research, Elsevier, vol. 92(2), pages 281-301, July.
    12. Eitan Altman & Konstantin Avrachenkov & Richard Marquez & Gregory Miller, 2005. "Zero-sum constrained stochastic games with independent state processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 62(3), pages 375-386, December.
    13. Dijk, N.M. van, 1989. "A simple performability estimate for Jackson networks with an unreliable output channel," Serie Research Memoranda 0032, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
    14. Daniel F. Silva & Bo Zhang & Hayriye Ayhan, 2018. "Admission control strategies for tandem Markovian loss systems," Queueing Systems: Theory and Applications, Springer, vol. 90(1), pages 35-63, October.
    15. B. Curtis Eaves & Arthur F. Veinott, 2014. "Maximum-Stopping-Value Policies in Finite Markov Population Decision Chains," Mathematics of Operations Research, INFORMS, vol. 39(3), pages 597-606, August.
    16. Richard T. Boylan & Bente Villadsen, "undated". "A Bellman's Equation for the Study of Income Smoothing," Computing in Economics and Finance 1996 _009, Society for Computational Economics.
    17. Vladimir Ejov & Jerzy A. Filar & Michael Haythorpe & Giang T. Nguyen, 2009. "Refined MDP-Based Branch-and-Fix Algorithm for the Hamiltonian Cycle Problem," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 758-768, August.
    18. D. P. de Farias & B. Van Roy, 2003. "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, INFORMS, vol. 51(6), pages 850-865, December.
    19. Nielsen, Lars Relund & Kristensen, Anders Ringgaard, 2006. "Finding the K best policies in a finite-horizon Markov decision process," European Journal of Operational Research, Elsevier, vol. 175(2), pages 1164-1179, December.
    20. Robert Kirkby, 2017. "Convergence of Discretized Value Function Iteration," Computational Economics, Springer;Society for Computational Economics, vol. 49(1), pages 117-153, January.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:vua:wpaper:1989-65. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows your profile to be linked to this item, and it also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: R. Dam (email available below). General contact details of provider: https://edirc.repec.org/data/fewvunl.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.