Printed from https://ideas.repec.org/a/gam/jgames/v14y2023i6p75-d1301887.html

Collaborative Cost Multi-Agent Decision-Making Algorithm with Factored-Value Monte Carlo Tree Search and Max-Plus

Author

Listed:
  • Nii-Emil Alexander-Reindorf

    (Department of Computer Science, School of Engineering and Applied Sciences, The University of the District of Columbia, Washington, DC 20008, USA)

  • Paul Cotae

    (Department of Electrical and Computer Engineering, School of Engineering and Applied Sciences, The University of the District of Columbia, Washington, DC 20008, USA)

Abstract

In this paper, we describe the Factored Value MCTS Hybrid Cost-Max-Plus algorithm, a family of decision-making algorithms (centralized, decentralized, and hybrid) for multi-agent systems in collaborative settings that account for action costs. The proposed algorithm consists of two steps. In the first step, each agent searches for its best individual actions with the lowest cost using the Monte Carlo Tree Search (MCTS) algorithm; each agent's most promising actions are then presented to the team. In the second step, the Hybrid Cost Max-Plus method is used for joint action selection. The Hybrid Cost Max-Plus algorithm improves on the well-known centralized and distributed Max-Plus algorithms by incorporating the cost of actions into agent interactions. Max-Plus employs the Coordination Graph framework, which exploits agent dependencies to decompose the global payoff function into a sum of local terms. The proposed Factored Value MCTS-Hybrid Cost-Max-Plus method is online, anytime, distributed, and scalable in the number of agents and their interactions. Our contribution competes with state-of-the-art methods by leveraging the locality of agent interactions for planning and acting with the MCTS and Max-Plus algorithms.
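The joint-action-selection step in the abstract builds on standard Max-Plus message passing over a coordination graph, with action costs folded into each agent's local payoff. As a rough illustration only (not the authors' Hybrid Cost implementation), here is a minimal cost-aware Max-Plus sketch in Python; all names (`max_plus`, `payoff`, `cost`) and the toy graph are assumptions made for the example:

```python
def max_plus(agents, edges, payoff, cost, actions, iters=10):
    """Pick a joint action by max-plus message passing on a coordination
    graph, treating per-agent action costs as negative node potentials."""
    def f(i, ai, j, aj):  # symmetric lookup of the pairwise payoff
        if (i, j) in payoff:
            return payoff[(i, j)][(ai, aj)]
        return payoff[(j, i)][(aj, ai)]

    neigh = {i: [] for i in agents}
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    # msg[(i, j)][aj]: agent i's valuation of neighbor j taking action aj
    msg = {(i, j): {a: 0.0 for a in actions}
           for i in agents for j in neigh[i]}
    for _ in range(iters):
        # synchronous update: the comprehension reads the old messages
        msg = {
            (i, j): {
                aj: max(
                    -cost[i][ai] + f(i, ai, j, aj)
                    + sum(msg[(k, i)][ai] for k in neigh[i] if k != j)
                    for ai in actions
                )
                for aj in actions
            }
            for (i, j) in msg
        }
    # each agent keeps the action maximizing incoming messages minus its cost
    return {
        i: max(actions,
               key=lambda ai: -cost[i][ai]
               + sum(msg[(k, i)][ai] for k in neigh[i]))
        for i in agents
    }

# Toy chain-shaped coordination graph: coordinating on "a" pays 10 per
# edge (20 total) at a cost of 1 per agent, so all-"a" is optimal.
agents = [0, 1, 2]
edges = [(0, 1), (1, 2)]
pairs = [("a", "a"), ("a", "b"), ("b", "a"), ("b", "b")]
payoff = {e: {p: (10.0 if p == ("a", "a") else 0.0) for p in pairs}
          for e in edges}
cost = {i: {"a": 1.0, "b": 0.0} for i in agents}
joint = max_plus(agents, edges, payoff, cost, ["a", "b"])
print(joint)  # → {0: 'a', 1: 'a', 2: 'a'}
```

On tree-shaped graphs this message passing is exact; on cyclic graphs it becomes an approximation (and messages are usually normalized), which is where the factored-value and hybrid variants discussed in the paper come in.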

Suggested Citation

  • Nii-Emil Alexander-Reindorf & Paul Cotae, 2023. "Collaborative Cost Multi-Agent Decision-Making Algorithm with Factored-Value Monte Carlo Tree Search and Max-Plus," Games, MDPI, vol. 14(6), pages 1-20, December.
  • Handle: RePEc:gam:jgames:v:14:y:2023:i:6:p:75-:d:1301887

    Download full text from publisher

    File URL: https://www.mdpi.com/2073-4336/14/6/75/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2073-4336/14/6/75/
    Download Restriction: no

    References listed on IDEAS

    1. Daniel S. Bernstein & Robert Givan & Neil Immerman & Shlomo Zilberstein, 2002. "The Complexity of Decentralized Control of Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(4), pages 819-840, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    2. Corine M. Laan & Ana Isabel Barros & Richard J. Boucherie & Herman Monsuur & Judith Timmer, 2019. "Solving partially observable agent‐intruder games with an application to border security problems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 66(2), pages 174-190, March.
    3. Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
    4. Lee, Hyun-Rok & Lee, Taesik, 2021. "Multi-agent reinforcement learning algorithm to solve a partially-observable multi-agent problem in disaster response," European Journal of Operational Research, Elsevier, vol. 291(1), pages 296-308.
    5. Yujia Ge & Yurong Nan & Xianhai Guo, 2021. "Maximizing network throughput by cooperative reinforcement learning in clustered solar-powered wireless sensor networks," International Journal of Distributed Sensor Networks, , vol. 17(4), pages 15501477211, April.
    6. Allan M. C. Bretas & Alexandre Mendes & Martin Jackson & Riley Clement & Claudio Sanhueza & Stephan Chalup, 2023. "A decentralised multi-agent system for rail freight traffic management," Annals of Operations Research, Springer, vol. 320(2), pages 631-661, January.
    7. Liangyi Pu & Song Wang & Xiaodong Huang & Xing Liu & Yawei Shi & Huiwei Wang, 2022. "Peer-to-Peer Trading for Energy-Saving Based on Reinforcement Learning," Energies, MDPI, vol. 15(24), pages 1-16, December.
    8. Yan Xia & Rajan Batta & Rakesh Nagi, 2017. "Controlling a Fleet of Unmanned Aerial Vehicles to Collect Uncertain Information in a Threat Environment," Operations Research, INFORMS, vol. 65(3), pages 674-692, June.
    9. Andriotis, C.P. & Papakonstantinou, K.G., 2021. "Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints," Reliability Engineering and System Safety, Elsevier, vol. 212(C).
    10. Guo, Xianping & Ye, Liuer & Yin, George, 2012. "A mean–variance optimization problem for discounted Markov decision processes," European Journal of Operational Research, Elsevier, vol. 220(2), pages 423-429.
    11. Olivier Tsemogne & Yezekael Hayel & Charles Kamhoua & Gabriel Deugoue, 2022. "A Partially Observable Stochastic Zero-sum Game for a Network Epidemic Control Problem," Dynamic Games and Applications, Springer, vol. 12(1), pages 82-109, March.
    12. Weichao Mao & Tamer Başar, 2023. "Provably Efficient Reinforcement Learning in Decentralized General-Sum Markov Games," Dynamic Games and Applications, Springer, vol. 13(1), pages 165-186, March.
    13. Louis Anthony Cox, 2020. "Answerable and Unanswerable Questions in Risk Analysis with Open‐World Novelty," Risk Analysis, John Wiley & Sons, vol. 40(S1), pages 2144-2177, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jgames:v:14:y:2023:i:6:p:75-:d:1301887. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.