
Multi-agent reinforcement learning with approximate model learning for competitive games

Authors

  • Young Joon Park
  • Yoon Sang Cho
  • Seoung Bum Kim

Abstract

We propose a method for learning multi-agent policies to compete against multiple opponents. The method consists of recurrent neural network-based actor-critic networks and deterministic policy gradients that promote cooperation between agents through communication. The learning process does not require access to opponents’ parameters or observations because the agents are trained separately from the opponents. The actor networks enable the agents to communicate using forward and backward paths, while the critic network helps train the actors by delivering gradient signals based on each agent’s contribution to the global reward. Moreover, to address the nonstationarity caused by the evolving policies of other agents, we propose approximate model learning that uses auxiliary prediction networks to model the state transitions, the reward function, and opponent behavior. In the test phase, we use competitive multi-agent environments to compare the proposed method with alternatives in terms of learning efficiency and goal achievement. The results show that the proposed method outperforms the alternatives.
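The abstract combines three mechanisms: recurrent actors that communicate, a centralized critic trained against the global reward, and auxiliary prediction networks that model state transitions, rewards, and opponent behavior. The page carries no code, so the following is a minimal PyTorch-style sketch of how such an agent could be wired together; every module name, tensor shape, and loss weight is an assumption made for illustration, not the authors' implementation.

    # Hypothetical sketch (not the authors' released code): a recurrent
    # actor-critic agent with auxiliary prediction heads for approximate
    # model learning. All names, shapes, and loss weights are assumptions.
    import torch
    import torch.nn as nn

    class RecurrentActor(nn.Module):
        """GRU-based actor: maps an observation sequence to a deterministic action."""
        def __init__(self, obs_dim, act_dim, hidden=64):
            super().__init__()
            self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, act_dim)

        def forward(self, obs_seq, h=None):
            out, h = self.gru(obs_seq, h)                # out: (batch, time, hidden)
            return torch.tanh(self.head(out[:, -1])), h  # deterministic action

    class CriticWithModelHeads(nn.Module):
        """Centralized critic whose shared trunk also feeds auxiliary heads that
        predict the next state, the reward, and the opponent's action."""
        def __init__(self, state_dim, act_dim, opp_act_dim, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(state_dim + act_dim, hidden),
                                       nn.ReLU())
            self.q_head = nn.Linear(hidden, 1)              # action value
            self.state_head = nn.Linear(hidden, state_dim)  # transition model
            self.reward_head = nn.Linear(hidden, 1)         # reward model
            self.opp_head = nn.Linear(hidden, opp_act_dim)  # opponent behavior

        def forward(self, state, action):
            z = self.trunk(torch.cat([state, action], dim=-1))
            return (self.q_head(z), self.state_head(z),
                    self.reward_head(z), self.opp_head(z))

    def update(actor, critic, batch, actor_opt, critic_opt, aux_weight=0.1):
        """One training step: critic regression plus the three auxiliary
        prediction losses, then a deterministic policy-gradient actor step."""
        obs_seq, state, action, reward, next_state, opp_action = batch
        q, state_pred, reward_pred, opp_pred = critic(state, action)
        td_target = reward  # placeholder; a full agent would bootstrap with a target net
        critic_loss = (nn.functional.mse_loss(q, td_target)
                       + aux_weight * nn.functional.mse_loss(state_pred, next_state)
                       + aux_weight * nn.functional.mse_loss(reward_pred, reward)
                       + aux_weight * nn.functional.mse_loss(opp_pred, opp_action))
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # Deterministic policy gradient: ascend the critic's Q w.r.t. the action.
        new_action, _ = actor(obs_seq)
        actor_loss = -critic(state, new_action)[0].mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

The point the sketch tries to capture is that the auxiliary heads share the critic's trunk, so the prediction losses shape the same representation the critic uses to score actions; this is one plausible reading of how approximate model learning counters the nonstationarity the abstract mentions.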

Suggested Citation

  • Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
  • Handle: RePEc:plo:pone00:0222215
    DOI: 10.1371/journal.pone.0222215

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0222215
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0222215&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0222215?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & Laurent Sifre & George van den Driessche & Thore Graepel & Demis Hassabis, 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    2. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    3. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    4. Zhang, Xi & Wang, Qin & Bi, Xiaowen & Li, Donghong & Liu, Dong & Yu, Yuanjin & Tse, Chi Kong, 2024. "Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions," Reliability Engineering and System Safety, Elsevier, vol. 250(C).
    5. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer;The Industrial Organization Society, vol. 55(1), pages 155-171, August.
    6. Adnan Jafar & Alessandra Kobayati & Michael A. Tsoukas & Ahmad Haidar, 2024. "Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    7. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    8. Artur Kwasek & Maria Kocot & Izabela Gontarek & Igor Protasowicki & Bartosz Blaszczak, 2024. "Negative Faces of Artificial Intelligence in the Conditions of the Knowledge-Based Economy," European Research Studies Journal, European Research Studies Journal, vol. 0(2), pages 465-477.
    9. Chung-Yuan Chang & Yen-Wei Feng & Tejender Singh Rawat & Shih-Wei Chen & Albert Shihchun Lin, 2025. "Optimization of laser annealing parameters based on Bayesian reinforcement learning," Journal of Intelligent Manufacturing, Springer, vol. 36(4), pages 2479-2492, April.
    10. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    11. Keller, Alexander & Dahm, Ken, 2019. "Integral equations and machine learning," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 161(C), pages 2-12.
    12. Canhoto, Ana Isabel & Clear, Fintan, 2020. "Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential," Business Horizons, Elsevier, vol. 63(2), pages 183-193.
    13. Zhaobin Mo & Xuan Di & Rongye Shi, 2023. "Robust Data Sampling in Machine Learning: A Game-Theoretic Framework for Training and Validation Data Selection," Games, MDPI, vol. 14(1), pages 1-13, January.
    14. Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
    15. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    16. Xueqing Yan & Yongming Li, 2023. "A Novel Discrete Differential Evolution with Varying Variables for the Deficiency Number of Mahjong Hand," Mathematics, MDPI, vol. 11(9), pages 1-21, May.
    17. José A. Torres-León & Marco A. Moreno-Armendáriz & Hiram Calvo, 2024. "Representing the Information of Multiplayer Online Battle Arena (MOBA) Video Games Using Convolutional Accordion Auto-Encoder (A²E) Enhanced by Attention Mechanisms," Mathematics, MDPI, vol. 12(17), pages 1-19, September.
    18. Jianjun Chen & Weihao Hu & Di Cao & Bin Zhang & Qi Huang & Zhe Chen & Frede Blaabjerg, 2019. "An Imbalance Fault Detection Algorithm for Variable-Speed Wind Turbines: A Deep Learning Approach," Energies, MDPI, vol. 12(14), pages 1-15, July.
    19. Andrew G. Haldane & Arthur E. Turrell, 2019. "Drawing on different disciplines: macroeconomic agent-based models," Journal of Evolutionary Economics, Springer, vol. 29(1), pages 39-66, March.
    20. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0222215. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.