
Adaptive aggregation for reinforcement learning in average reward Markov decision processes

Author

  • Ronald Ortner

Abstract

We present an algorithm that aggregates states online while learning to behave optimally in an average-reward Markov decision process. The algorithm is based on the reinforcement learning algorithm UCRL and uses confidence intervals to aggregate the state space. We derive bounds on the regret our algorithm suffers with respect to an optimal policy; these bounds are only slightly worse than the original bounds for UCRL. Copyright Springer Science+Business Media, LLC 2013
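The abstract's core idea — merging states whose estimated rewards cannot be statistically distinguished — can be sketched as follows. This is an illustrative construction with a generic Hoeffding-style confidence radius, not Ortner's exact UCRL bounds; the function names and the sample statistics below are hypothetical.

```python
import math

def confidence_interval(total_reward, visits, t, delta=0.05):
    """Mean-reward estimate with a Hoeffding-style confidence radius.

    Rewards are assumed to lie in [0, 1]; t is the current time step.
    Illustrative only -- not the exact confidence bound used by UCRL.
    """
    if visits == 0:
        return (0.0, 1.0)  # no information yet: the whole range is plausible
    mean = total_reward / visits
    radius = math.sqrt(math.log(2 * t / delta) / (2 * visits))
    return (max(0.0, mean - radius), min(1.0, mean + radius))

def aggregate_states(intervals):
    """Group states whose reward confidence intervals overlap.

    Overlapping intervals mean the states are indistinguishable at the
    current confidence level, so they are merged into one aggregate
    (meta-)state. Returns a list of groups of state indices.
    """
    order = sorted(range(len(intervals)), key=lambda s: intervals[s][0])
    groups, current, hi = [], [], float("-inf")
    for s in order:
        lo_s, hi_s = intervals[s]
        if current and lo_s <= hi:   # overlaps the running group
            current.append(s)
            hi = max(hi, hi_s)
        else:                        # gap: start a new aggregate state
            if current:
                groups.append(current)
            current, hi = [s], hi_s
    if current:
        groups.append(current)
    return groups

# Hypothetical (total reward, visit count) statistics for four states:
# two states with mean ~0.8 and two with mean ~0.3.
stats = [(1600.0, 2000), (1560.0, 2000), (600.0, 2000), (640.0, 2000)]
intervals = [confidence_interval(r, n, t=10_000) for r, n in stats]
print(aggregate_states(intervals))  # two aggregate states: {0, 1} and {2, 3}
```

As more observations arrive, the confidence intervals shrink, previously merged states can separate again, and the aggregation adapts — which is the "online" aspect the abstract refers to.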

Suggested Citation

  • Ronald Ortner, 2013. "Adaptive aggregation for reinforcement learning in average reward Markov decision processes," Annals of Operations Research, Springer, vol. 208(1), pages 321-336, September.
  • Handle: RePEc:spr:annopr:v:208:y:2013:i:1:p:321-336:10.1007/s10479-012-1064-y
    DOI: 10.1007/s10479-012-1064-y

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s10479-012-1064-y
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s10479-012-1064-y?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hyeong Soo Chang & Michael C. Fu & Jiaqiao Hu & Steven I. Marcus, 2005. "An Adaptive Sampling Algorithm for Solving Markov Decision Processes," Operations Research, INFORMS, vol. 53(1), pages 126-139, February.
    2. Apostolos N. Burnetas & Michael N. Katehakis, 1997. "Optimal Adaptive Policies for Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 22(1), pages 222-255, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Arnoud V. den Boer & Bert Zwart, 2015. "Dynamic Pricing and Learning with Finite Inventories," Operations Research, INFORMS, vol. 63(4), pages 965-978, August.
    2. Kartikeya Puranam & Michael Katehakis, 2014. "On optimal bidding and inventory control in sequential procurement auctions: the multi period case," Annals of Operations Research, Springer, vol. 217(1), pages 447-462, June.
    3. Athanassios N. Avramidis & Arnoud V. Boer, 2021. "Dynamic pricing with finite price sets: a non-parametric approach," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 94(1), pages 1-34, August.
    4. Savas Dayanik & Warren Powell & Kazutoshi Yamazaki, 2013. "Asymptotically optimal Bayesian sequential change detection and identification rules," Annals of Operations Research, Springer, vol. 208(1), pages 337-370, September.
    5. William L. Cooper & Bharath Rangarajan, 2012. "Performance Guarantees for Empirical Markov Decision Processes with Applications to Multiperiod Inventory Models," Operations Research, INFORMS, vol. 60(5), pages 1267-1281, October.
    6. Daniel R. Jiang & Lina Al-Kanj & Warren B. Powell, 2020. "Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds," Operations Research, INFORMS, vol. 68(6), pages 1678-1697, November.
    7. Rishabh Gupta & Qi Zhang, 2022. "Decomposition and Adaptive Sampling for Data-Driven Inverse Linear Optimization," INFORMS Journal on Computing, INFORMS, vol. 34(5), pages 2720-2735, September.
    8. Michael C. Fu, 2019. "Simulation-Based Algorithms for Markov Decision Processes: Monte Carlo Tree Search from AlphaGo to AlphaZero," Asia-Pacific Journal of Operational Research (APJOR), World Scientific Publishing Co. Pte. Ltd., vol. 36(06), pages 1-25, December.
    9. Katehakis, Michael N. & Puranam, Kartikeya S., 2012. "On bidding for a fixed number of items in a sequence of auctions," European Journal of Operational Research, Elsevier, vol. 222(1), pages 76-84.
    10. Inchi Hu & Chi-Wen Jevons Lee, 2003. "Bayesian Adaptive Stochastic Process Termination," Mathematics of Operations Research, INFORMS, vol. 28(2), pages 361-381, May.
    11. Agbo, Maxime, 2015. "A perpetual search for talents across overlapping generations: A learning process," Mathematical Social Sciences, Elsevier, vol. 76(C), pages 131-145.
    12. Mohammed Shahid Abdulla & Shalabh Bhatnagar, 2016. "Multi-armed bandits based on a variant of Simulated Annealing," Indian Journal of Pure and Applied Mathematics, Springer, vol. 47(2), pages 195-212, June.
    13. Oleg Szehr, 2021. "Hedging of Financial Derivative Contracts via Monte Carlo Tree Search," Papers 2102.06274, arXiv.org, revised Apr 2021.
    14. Woonghee Tim Huh & Paat Rusmevichientong, 2014. "Online Sequential Optimization with Biased Gradients: Theory and Applications to Censored Demand," INFORMS Journal on Computing, INFORMS, vol. 26(1), pages 150-159, February.
    15. Satya S. Malladi & Alan L. Erera & Chelsea C. White, 2021. "Managing mobile production-inventory systems influenced by a modulation process," Annals of Operations Research, Springer, vol. 304(1), pages 299-330, September.
    16. Apostolos Burnetas, 2022. "Learning and data-driven optimization in queues with strategic customers," Queueing Systems: Theory and Applications, Springer, vol. 100(3), pages 517-519, April.
    17. He Huang & DaPeng Liang & Liang Liang & Zhen Tong, 2019. "Research on China’s Power Sustainable Transition Under Progressively Levelized Power Generation Cost Based on a Dynamic Integrated Generation–Transmission Planning Model," Sustainability, MDPI, vol. 11(8), pages 1-21, April.
    18. Warren B. Powell, 2016. "Perspectives of approximate dynamic programming," Annals of Operations Research, Springer, vol. 241(1), pages 319-356, June.
    19. Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:annopr:v:208:y:2013:i:1:p:321-336:10.1007/s10479-012-1064-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item, and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.