IDEAS home Printed from https://ideas.repec.org/a/plo/pone00/0172395.html

Multiagent cooperation and competition with deep reinforcement learning

Author

Listed:
  • Ardi Tampuu
  • Tambet Matiisen
  • Dorian Kodelja
  • Ilya Kuzovkin
  • Kristjan Korjus
  • Juhan Aru
  • Jaan Aru
  • Raul Vicente

Abstract

Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.
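The setup the abstract describes — two agents that each learn independently from their own reward signal, with cooperation or competition induced purely by the rewarding scheme — can be sketched in miniature. The snippet below is an illustrative sketch, not the paper's implementation: it replaces the deep Q-networks with tabular Q-learning, and the names `pong_rewards`, `IndependentQLearner`, and the interpolation coefficient `rho` are choices made here for illustration (the paper similarly varies the reward given when a point is scored, from fully competitive to fully cooperative).

```python
import random


def pong_rewards(scorer: str, rho: float) -> tuple:
    """Reward pair (left, right) when `scorer` puts the ball past the opponent.

    The conceding player always receives -1. The scoring player receives
    rho, interpolating from fully competitive (rho = +1, the classical
    zero-sum Pong) to fully cooperative (rho = -1, where both players are
    penalized whenever the ball is lost at all).
    """
    if scorer == "left":
        return (rho, -1.0)
    return (-1.0, rho)


class IndependentQLearner:
    """Tabular stand-in for one deep Q-network: the agent maintains its own
    action-value table and learns only from its own rewards, treating the
    other agent as part of the environment (decentralized learning)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = {}                      # state -> list of action values
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy: explore with probability epsilon, else pick the
        # currently highest-valued action for this state.
        if random.random() < self.epsilon or state not in self.q:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the bootstrapped target.
        self.q.setdefault(state, [0.0] * self.n_actions)
        self.q.setdefault(next_state, [0.0] * self.n_actions)
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

In a training loop, two `IndependentQLearner` instances would each call `act` on their own observation and `update` on their own half of the `pong_rewards` pair; neither agent sees the other's values or gradients, which is what makes the scheme decentralized.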

Suggested Citation

  • Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
  • Handle: RePEc:plo:pone00:0172395
    DOI: 10.1371/journal.pone.0172395

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0172395
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0172395&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0172395?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    2. Li, Xingyu & Epureanu, Bogdan I., 2020. "AI-based competition of autonomous vehicle fleets with application to fleet modularity," European Journal of Operational Research, Elsevier, vol. 287(3), pages 856-874.
    3. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    4. Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
    5. Wang, Xuekai & D’Ariano, Andrea & Su, Shuai & Tang, Tao, 2023. "Cooperative train control during the power supply shortage in metro system: A multi-agent reinforcement learning approach," Transportation Research Part B: Methodological, Elsevier, vol. 170(C), pages 244-278.
    6. Tianhao Wang & Shiqian Ma & Na Xu & Tianchun Xiang & Xiaoyun Han & Chaoxu Mu & Yao Jin, 2022. "Secondary Voltage Collaborative Control of Distributed Energy System via Multi-Agent Reinforcement Learning," Energies, MDPI, vol. 15(19), pages 1-12, September.
    7. Lee, Hyun-Rok & Lee, Taesik, 2021. "Multi-agent reinforcement learning algorithm to solve a partially-observable multi-agent problem in disaster response," European Journal of Operational Research, Elsevier, vol. 291(1), pages 296-308.
    8. Aymanns, Christoph & Foerster, Jakob & Georg, Co-Pierre & Weber, Matthias, 2022. "Fake News in Social Networks," SocArXiv y4mkd, Center for Open Science.
    9. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing What Implications for Competition Policy?," Review of Industrial Organization, Springer;The Industrial Organization Society, vol. 55(1), pages 155-171, August.
    10. Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
    11. Marilleau, Nicolas & Lang, Christophe & Giraudoux, Patrick, 2018. "Coupling agent-based with equation-based models to study spatially explicit megapopulation dynamics," Ecological Modelling, Elsevier, vol. 384(C), pages 34-42.
    12. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0172395. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.