Printed from https://ideas.repec.org/a/spr/queues/v109y2025i3d10.1007_s11134-025-09949-y.html

Convergence of Natural Policy Gradient for a family of infinite-state queueing MDPs

Author

Listed:
  • Isaac Grosof

    (Northwestern University)

  • Siva Theja Maguluri

    (Georgia Institute of Technology)

  • R. Srikant

    (University of Illinois, Urbana-Champaign)

Abstract

A wide variety of queueing systems can be naturally modeled as infinite-state Markov Decision Processes (MDPs). In the reinforcement learning (RL) context, a variety of algorithms have been developed to learn and optimize these MDPs. At the heart of many popular policy-gradient-based learning algorithms, such as natural actor-critic, TRPO, and PPO, lies the Natural Policy Gradient (NPG) policy optimization algorithm. Convergence results for these RL algorithms rest on convergence results for the NPG algorithm. However, all existing results on the convergence of the NPG algorithm are limited to finite-state settings. We study a general class of queueing MDPs and prove an $O(1/\sqrt{T})$ convergence rate for the NPG algorithm, if the NPG algorithm is initialized with the MaxWeight policy. This is the first convergence rate bound for the NPG algorithm for a general class of infinite-state average-reward MDPs. Moreover, our result applies beyond the queueing setting to any countably infinite MDP satisfying certain mild structural assumptions, given a sufficiently good initial policy. Key to our result are state-dependent bounds on the relative value function achieved by the iterate policies of the NPG algorithm.
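To make the abstract's setup concrete, the following is a minimal sketch (not the paper's method) of the two ingredients it names: the tabular softmax NPG update, which is known to reduce to adding scaled advantage estimates to the policy logits, and a MaxWeight-style initialization for a hypothetical two-queue example. All variable names, queue sizes, and rates here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def npg_update(theta, advantages, eta):
    # Under a tabular softmax parameterization, one natural policy
    # gradient step reduces to adding the scaled (estimated)
    # advantages to the logits for the current state.
    return theta + eta * advantages

# Hypothetical two-queue state: MaxWeight serves the queue maximizing
# queue length * service rate. Encoding that preference as sharp logits
# makes the induced softmax policy concentrate on the MaxWeight action.
q = np.array([3.0, 1.0])    # queue lengths (illustrative)
mu = np.array([1.0, 2.0])   # service rates (illustrative)
kappa = 10.0                # sharpness; larger -> closer to deterministic MaxWeight
theta0 = kappa * q * mu
pi0 = softmax(theta0)       # initial policy favors action 0 (weight 3 vs 2)
```

In a full NPG run, `npg_update` would be applied per state with advantage estimates for the current iterate policy; the paper's contribution is showing that, from a MaxWeight-style start, these iterates converge at rate $O(1/\sqrt{T})$ even though the state space is infinite.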

Suggested Citation

  • Isaac Grosof & Siva Theja Maguluri & R. Srikant, 2025. "Convergence of Natural Policy Gradient for a family of infinite-state queueing MDPs," Queueing Systems: Theory and Applications, Springer, vol. 109(3), pages 1-40, September.
  • Handle: RePEc:spr:queues:v:109:y:2025:i:3:d:10.1007_s11134-025-09949-y
    DOI: 10.1007/s11134-025-09949-y

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11134-025-09949-y
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11134-025-09949-y?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cong Shi & Weidong Chen & Izak Duenyas, 2016. "Technical Note—Nonparametric Data-Driven Algorithms for Multiproduct Inventory Systems with Censored Demand," Operations Research, INFORMS, vol. 64(2), pages 362-370, April.
    2. Sumit Kunnumkal & Huseyin Topaloglu, 2009. "A stochastic approximation method for the single-leg revenue management problem with discrete demand distributions," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 70(3), pages 477-504, December.
    3. Gah-Yi Ban, 2020. "Confidence Intervals for Data-Driven Inventory Policies with Demand Censoring," Operations Research, INFORMS, vol. 68(2), pages 309-326, March.
    4. Jason M. Altschuler & Kunal Talwar, 2021. "Online Learning over a Finite Action Set with Limited Switching," Mathematics of Operations Research, INFORMS, vol. 46(1), pages 179-203, February.
    5. Arnoud V. den Boer & Bert Zwart, 2015. "Dynamic Pricing and Learning with Finite Inventories," Operations Research, INFORMS, vol. 63(4), pages 965-978, August.
    6. Guan, Yongpei & Liu, Tieming, 2010. "Stochastic lot-sizing problem with inventory-bounds and constant order-capacities," European Journal of Operational Research, Elsevier, vol. 207(3), pages 1398-1409, December.
    7. Michiel De Muynck & Herwig Bruneel & Sabine Wittevrongel, 2023. "Analysis of a Queue with General Service Demands and Multiple Servers with Variable Service Capacities," Mathematics, MDPI, vol. 11(4), pages 1-21, February.
    8. Omar Besbes & Alp Muharremoglu, 2013. "On Implications of Demand Censoring in the Newsvendor Problem," Management Science, INFORMS, vol. 59(6), pages 1407-1424, June.
    9. Woonghee Tim Huh & Paat Rusmevichientong, 2014. "Online Sequential Optimization with Biased Gradients: Theory and Applications to Censored Demand," INFORMS Journal on Computing, INFORMS, vol. 26(1), pages 150-159, February.
    10. Shiau Hong Lim & Huan Xu & Shie Mannor, 2016. "Reinforcement Learning in Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1325-1353, November.
    11. Lin An & Andrew A. Li & Benjamin Moseley & R. Ravi, 2023. "The Nonstationary Newsvendor with (and without) Predictions," Papers 2305.07993, arXiv.org, revised Feb 2025.
    12. Bingfeng Bai & Bo Li & Xingzhi Jia, 2025. "The impacts of stockout cost on a stochastic production-inventory system in minimizing total cost conditional value-at-risk," Flexible Services and Manufacturing Journal, Springer, vol. 37(3), pages 943-978, September.
    13. Ren, Ke & Bidkhori, Hoda & Shen, Zuo-Jun Max, 2024. "Data-driven inventory policy: Learning from sequentially observed non-stationary data," Omega, Elsevier, vol. 123(C).
    14. Jiri Chod & Mihalis G. Markakis & Nikolaos Trichakis, 2021. "On the Learning Benefits of Resource Flexibility," Management Science, INFORMS, vol. 67(10), pages 6513-6528, October.
    15. Gah-Yi Ban & Cynthia Rudin, 2019. "The Big Data Newsvendor: Practical Insights from Machine Learning," Operations Research, INFORMS, vol. 67(1), pages 90-108, January.
    16. Dileep Kalathil & Vivek S. Borkar & Rahul Jain, 2017. "Approachability in Stackelberg Stochastic Games with Vector Costs," Dynamic Games and Applications, Springer, vol. 7(3), pages 422-442, September.
    17. Meng Qi & Ho‐Yin Mak & Zuo‐Jun Max Shen, 2020. "Data‐driven research in retail operations—A review," Naval Research Logistics (NRL), John Wiley & Sons, vol. 67(8), pages 595-616, December.
    18. Gah-Yi Ban & Jérémie Gallien & Adam J. Mersereau, 2019. "Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method," Manufacturing & Service Operations Management, INFORMS, vol. 21(4), pages 798-815, October.
    19. Kang Cheng & Kanjian Zhang & Shumin Fei & Haikun Wei, 2016. "Potential-Based Least-Squares Policy Iteration for a Parameterized Feedback Control System," Journal of Optimization Theory and Applications, Springer, vol. 169(2), pages 692-704, May.
    20. Andrew F. Siegel & Michael R. Wagner, 2021. "Profit Estimation Error in the Newsvendor Model Under a Parametric Demand Distribution," Management Science, INFORMS, vol. 67(8), pages 4863-4879, August.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:queues:v:109:y:2025:i:3:d:10.1007_s11134-025-09949-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.