Author
Listed:
- Wang, Xianjia
- Yang, Zhipeng
- Chen, Guici
- Liu, Yanli
Abstract
In evolutionary game theory, the emergence and maintenance of cooperative behavior in a population often face challenges posed by the temptation of free-riding, which offers a high individual payoff. Recently, apart from a range of mechanisms that promote the formation of cooperation, individual learning ability under limited information has emerged as a key factor in adjusting agents' strategies. This paper introduces Q-learning and particle swarm optimization into the realm of evolutionary dynamics. The primary focus is on investigating the impact of Exploration-based Particle Swarm Optimization (EPSO) and Q-learning-based Particle Swarm Optimization (QPSO) on the evolution of cooperation in a continuous version of the spatial public goods game (SPGG) with punishment. EPSO defines a rule for updating agents' strategies based on individual and limited population information, and integrates an exploration mechanism to increase the diversity and directionality of the strategies. QPSO, in turn, adaptively optimizes the parameters of EPSO, addressing the parameter-control issue that limits EPSO's performance. Through experiential learning and iterative adjustment, QPSO progressively refines the system parameters, so that agents rationally assimilate knowledge and update their strategies to attain optimal payoffs. Extensive simulation studies show that employing QPSO's adaptively optimized parameters in EPSO significantly promotes cooperative evolution in the SPGG with punishment. Furthermore, both low and high individual learning coefficients facilitate the emergence of cooperation. At the same time, higher inertia weight coefficients strengthen the system's cooperation level, while lower punishment intensity coefficients and higher gain intensity coefficients effectively promote the emergence of cooperation and exert a significant influence on the overall cooperation level of the system.
This research provides a new perspective for designing real-world schemes that encourage cooperation and offers insights into the intricate dynamics of cooperation in complex systems.
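The paper itself is paywalled, but the abstract's description of EPSO (inertia-weighted velocity updates pulled toward personal and population bests, plus an exploration perturbation) maps onto a standard particle-swarm strategy update. The sketch below illustrates that general scheme for a continuous public goods game with punishment; all parameter names (`w`, `c1`, `c2`, `eps`, punishment intensity `beta`, enhancement factor `r`) and the exact payoff and punishment rules are assumptions for illustration, not the authors' exact model.

```python
import random

def pgg_payoff(contribs, i, r=3.0, beta=0.5):
    # Continuous public goods payoff for agent i: equal share of the
    # amplified common pool minus i's own contribution. As an assumed
    # punishment rule, agents contributing below the group mean lose
    # beta times their shortfall.
    n = len(contribs)
    share = r * sum(contribs) / n
    payoff = share - contribs[i]
    mean = sum(contribs) / n
    if contribs[i] < mean:                      # punish free-riding
        payoff -= beta * (mean - contribs[i])
    return payoff

def epso_step(strategies, velocities, pbest, gbest,
              w=0.7, c1=1.5, c2=1.5, eps=0.05, rng=random):
    # One PSO-style strategy update: inertia term, pull toward each
    # agent's personal-best strategy and the population-best strategy,
    # plus a small uniform exploration perturbation (the "E" in EPSO).
    # Strategies (contribution levels) are clipped to [0, 1].
    new_s, new_v = [], []
    for s, v, pb in zip(strategies, velocities, pbest):
        v = (w * v
             + c1 * rng.random() * (pb - s)
             + c2 * rng.random() * (gbest - s)
             + eps * (rng.random() - 0.5))      # exploration term
        s = min(1.0, max(0.0, s + v))
        new_s.append(s)
        new_v.append(v)
    return new_s, new_v
```

In the paper, QPSO would tune `w`, `c1`, and `c2` online via Q-learning rather than fixing them as above; this fragment only shows the inner strategy-update rule that those parameters control.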
Suggested Citation
Wang, Xianjia & Yang, Zhipeng & Chen, Guici & Liu, Yanli, 2024.
"Enhancing cooperative evolution in spatial public goods game by particle swarm optimization based on exploration and q-learning,"
Applied Mathematics and Computation, Elsevier, vol. 469(C).
Handle:
RePEc:eee:apmaco:v:469:y:2024:i:c:s0096300324000067
DOI: 10.1016/j.amc.2024.128534
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:apmaco:v:469:y:2024:i:c:s0096300324000067. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/applied-mathematics-and-computation
Please note that corrections may take a couple of weeks to filter through the various RePEc services.