
Asymptotic Optimality for Decentralised Bandits

Author

Listed:
  • Conor J. Newton (University of Bristol)
  • Ayalvadi Ganesh (University of Bristol)
  • Henry W. J. Reeve (University of Bristol)

Abstract

We consider a large number of agents collaborating on a multi-armed bandit problem with a large number of arms. The goal is to minimise the regret of each agent in a communication-constrained setting. We present a decentralised algorithm which builds upon and improves the Gossip-Insert-Eliminate method of Chawla et al. (International conference on artificial intelligence and statistics, pp 3471–3481, 2020). We provide a theoretical analysis of the regret incurred which shows that our algorithm is asymptotically optimal. In fact, our regret guarantee matches the asymptotically optimal rate achievable in the full communication setting. Finally, we present empirical results which support our conclusions.
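For context, the benchmark behind "asymptotically optimal" is the classical Lai–Robbins lower bound for stochastic bandits; it is stated here for orientation as a standard result, not quoted from the paper. Any uniformly good single-agent policy facing arms with means μ_a and optimal mean μ* satisfies

$$\liminf_{T \to \infty} \frac{R_T}{\log T} \;\geq\; \sum_{a \,:\, \Delta_a > 0} \frac{\Delta_a}{\mathrm{KL}(\mu_a, \mu^{*})}, \qquad \Delta_a = \mu^{*} - \mu_a,$$

where KL denotes the Kullback–Leibler divergence between the reward distribution of arm a and that of the best arm. Matching this rate for each agent, even though each agent exchanges only a small number of arm indices with neighbours, is the sense in which the regret guarantee matches the full-communication setting.

The sketch below illustrates the general gossip pattern the abstract describes: each agent runs a standard UCB rule over a small active subset of the arms and periodically adopts a neighbour's empirically best arm. This is a minimal illustration only; the sticky-arm sets, phase schedule, and elimination rule of the actual Gossip-Insert-Eliminate algorithm (and the authors' refinement of it) are simplified away, and all names and parameters here are placeholders.

```python
import math
import random

class GossipAgent:
    """UCB over a small active arm set, plus a gossip 'insert' step.

    Illustrative only: not the exact Gossip-Insert-Eliminate algorithm.
    """

    def __init__(self, initial_arms):
        self.arms = set(initial_arms)          # active subset of all arms
        self.counts = {a: 0 for a in initial_arms}
        self.sums = {a: 0.0 for a in initial_arms}

    def ucb_index(self, arm, t):
        if self.counts[arm] == 0:
            return float("inf")                # force one pull of new arms
        mean = self.sums[arm] / self.counts[arm]
        return mean + math.sqrt(2.0 * math.log(t) / self.counts[arm])

    def pull(self, t, reward_fn):
        arm = max(self.arms, key=lambda a: self.ucb_index(a, t))
        reward = reward_fn(arm)
        self.counts[arm] += 1
        self.sums[arm] += reward

    def best_empirical(self):
        played = [a for a in self.arms if self.counts[a] > 0]
        return max(played, key=lambda a: self.sums[a] / self.counts[a])

    def receive(self, arm):
        # 'Insert': adopt a neighbour's recommendation. A faithful version
        # would also 'eliminate' an arm to keep the active set small.
        if arm not in self.arms:
            self.arms.add(arm)
            self.counts[arm] = 0
            self.sums[arm] = 0.0

def run(num_agents=10, num_arms=50, horizon=10_000, gossip_every=500, seed=0):
    rng = random.Random(seed)
    means = [rng.random() for _ in range(num_arms)]            # Bernoulli arms
    reward = lambda a: 1.0 if rng.random() < means[a] else 0.0
    agents = [GossipAgent(rng.sample(range(num_arms), 3)) for _ in range(num_agents)]
    for t in range(1, horizon + 1):
        for agent in agents:
            agent.pull(t, reward)
        if t % gossip_every == 0:              # bounded communication per phase
            for agent in agents:
                neighbour = rng.choice(agents)
                agent.receive(neighbour.best_empirical())
    return agents, means
```

Because each agent only ever explores a small active set while recommendations spread by gossip, the best arm propagates through the network while per-agent communication stays bounded; this is the communication-constrained regime the abstract refers to.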

Suggested Citation

  • Conor J. Newton & Ayalvadi Ganesh & Henry W. J. Reeve, 2023. "Asymptotic Optimality for Decentralised Bandits," Dynamic Games and Applications, Springer, vol. 13(1), pages 307-325, March.
  • Handle: RePEc:spr:dyngam:v:13:y:2023:i:1:d:10.1007_s13235-022-00451-1
    DOI: 10.1007/s13235-022-00451-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s13235-022-00451-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.


As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Arthur Charpentier & Romuald Élie & Carl Remlinger, 2023. "Reinforcement Learning in Economics and Finance," Computational Economics, Springer;Society for Computational Economics, vol. 62(1), pages 425-462, June.
    2. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    3. Aaron Wheeler & Jeffrey D. Varner, 2023. "Scalable Agent-Based Modeling for Complex Financial Market Simulations," Papers 2312.14903, arXiv.org, revised Jan 2024.
    4. Ali Raheman & Anton Kolonin & Alexey Glushchenko & Arseniy Fokin & Ikram Ansari, 2022. "Adaptive Multi-Strategy Market-Making Agent For Volatile Markets," Papers 2204.13265, arXiv.org.
5. Bruno Gašperov & Zvonko Kostanjčar, 2022. "Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model," Papers 2207.09951, arXiv.org.
    6. Johann Lussange & Boris Gutkin, 2023. "Order book regulatory impact on stock market quality: a multi-agent reinforcement learning perspective," Papers 2302.04184, arXiv.org.
    7. Pankaj Kumar, 2021. "Deep Hawkes Process for High-Frequency Market Making," Papers 2109.15110, arXiv.org.
    8. Nelson Vadori & Leo Ardon & Sumitra Ganesh & Thomas Spooner & Selim Amrouni & Jared Vann & Mengda Xu & Zeyu Zheng & Tucker Balch & Manuela Veloso, 2022. "Towards Multi-Agent Reinforcement Learning driven Over-The-Counter Market Simulations," Papers 2210.07184, arXiv.org, revised Aug 2023.
    9. Arthur Charpentier & Romuald Elie & Carl Remlinger, 2020. "Reinforcement Learning in Economics and Finance," Papers 2003.10014, arXiv.org.
    10. Joseph Jerome & Leandro Sanchez-Betancourt & Rahul Savani & Martin Herdegen, 2022. "Model-based gym environments for limit order book trading," Papers 2209.07823, arXiv.org.
    11. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    12. Adam Bouland & Wim van Dam & Hamed Joorati & Iordanis Kerenidis & Anupam Prakash, 2020. "Prospects and challenges of quantum finance," Papers 2011.06492, arXiv.org.
    13. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    14. Isao Yagi & Yuji Masuda & Takanobu Mizuta, 2020. "Analysis of the Impact of High-Frequency Trading on Artificial Market Liquidity," Papers 2010.13038, arXiv.org.
    15. Bingyan Han, 2022. "Can maker-taker fees prevent algorithmic cooperation in market making?," Papers 2211.00496, arXiv.org.
    16. Leo Ardon & Nelson Vadori & Thomas Spooner & Mengda Xu & Jared Vann & Sumitra Ganesh, 2021. "Towards a fully RL-based Market Simulator," Papers 2110.06829, arXiv.org, revised Nov 2021.
    17. Michael Karpe, 2020. "An overall view of key problems in algorithmic trading and recent progress," Papers 2006.05515, arXiv.org.
    18. Johann Lussange & Stefano Vrizzi & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2023. "Stock Price Formation: Precepts from a Multi-Agent Reinforcement Learning Model," Computational Economics, Springer;Society for Computational Economics, vol. 61(4), pages 1523-1544, April.
    19. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    20. Nelson Vadori & Sumitra Ganesh & Prashant Reddy & Manuela Veloso, 2020. "Risk-Sensitive Reinforcement Learning: a Martingale Approach to Reward Uncertainty," Papers 2006.12686, arXiv.org, revised Sep 2020.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:dyngam:v:13:y:2023:i:1:d:10.1007_s13235-022-00451-1. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.