Printed from https://ideas.repec.org/a/eee/chsofr/v192y2025ics0960077925000463.html

Proximal policy optimization approach to stabilize the chaotic food web system

Author

Listed:
  • Xu, Liang
  • Ma, Ru-Ru
  • Wu, Jie
  • Rao, Pengchun

Abstract

Chaotic phenomena arise in many real-world systems, and suppressing such undesired behavior is often challenging. Unlike traditional linear and nonlinear control methods, this study introduces a deep reinforcement learning (DRL)-based scheme to regulate the chaotic food web system (FWS). Specifically, the proximal policy optimization (PPO) algorithm is used to train the agent, which requires no prior knowledge of the chaotic FWS. Experimental results demonstrate that the developed DRL-based control scheme effectively guides the FWS toward a predetermined stable state. Furthermore, the investigation considers the influence of environmental noise on the chaotic FWS and finds that incorporating noise during training enhances the controller's robustness and the system's adaptability.
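For illustration only, the sketch below shows one way such a PPO-based chaos controller can be set up: a chaotic three-species food web wrapped as a Gymnasium environment and trained with Stable-Baselines3's PPO, with Gaussian noise injected during training in the spirit of the robustness result described above. The Hastings-Powell food chain model, the additive control inputs, the quadratic reward, the target state, and all parameter values are assumptions made for this sketch, not the formulation used in the article.

```python
# Illustrative sketch only: a PPO controller for a chaotic food web model.
# The Hastings-Powell three-species food chain, additive control inputs, the
# quadratic reward, the target state, and every parameter value below are
# assumptions for this demo, not the formulation used in the article.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class FoodWebEnv(gym.Env):
    """Chaotic three-species food web with an additive control on each species."""

    def __init__(self, target=(0.8, 0.2, 9.0), noise_std=0.01, dt=0.05, horizon=400):
        super().__init__()
        self.target = np.asarray(target, dtype=np.float32)  # assumed setpoint
        self.noise_std, self.dt, self.horizon = noise_std, dt, horizon
        self.action_space = spaces.Box(-0.5, 0.5, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, 20.0, shape=(3,), dtype=np.float32)

    def _deriv(self, s, u):
        # Hastings-Powell dynamics with classic chaotic parameters (assumed).
        x, y, z = s
        a1, b1, a2, b2, d1, d2 = 5.0, 3.0, 0.1, 2.0, 0.4, 0.01
        dx = x * (1 - x) - a1 * x * y / (1 + b1 * x) + u[0]
        dy = a1 * x * y / (1 + b1 * x) - a2 * y * z / (1 + b2 * y) - d1 * y + u[1]
        dz = a2 * y * z / (1 + b2 * y) - d2 * z + u[2]
        return np.array([dx, dy, dz])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(0.5, 1.0, size=3).astype(np.float32)
        self.t = 0
        return self.state.copy(), {}

    def step(self, action):
        # One Euler step plus additive "environmental" noise during training.
        noise = self.np_random.normal(0.0, self.noise_std, size=3)
        self.state = np.clip(
            self.state + self.dt * self._deriv(self.state, action) + noise, 0.0, 20.0
        ).astype(np.float32)
        self.t += 1
        err = float(np.linalg.norm(self.state - self.target))
        reward = -err ** 2 - 0.01 * float(np.sum(np.square(action)))
        return self.state.copy(), reward, False, self.t >= self.horizon, {}


if __name__ == "__main__":
    env = FoodWebEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)  # toy budget; tune for real convergence
    obs, _ = env.reset(seed=0)
    for _ in range(env.horizon):
        action, _ = model.predict(obs, deterministic=True)
        obs, _, _, truncated, _ = env.step(action)
        if truncated:
            break
    print("final state:", obs, "target:", env.target)
```

Training with a nonzero noise_std and then evaluating the learned policy on perturbed dynamics is one simple way to probe the robustness effect reported in the abstract.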

Suggested Citation

  • Xu, Liang & Ma, Ru-Ru & Wu, Jie & Rao, Pengchun, 2025. "Proximal policy optimization approach to stabilize the chaotic food web system," Chaos, Solitons & Fractals, Elsevier, vol. 192(C).
  • Handle: RePEc:eee:chsofr:v:192:y:2025:i:c:s0960077925000463
    DOI: 10.1016/j.chaos.2025.116033

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0960077925000463
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.chaos.2025.116033?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wu, Jie & Xu, Wei & Wang, Xiaofeng & Ma, Ru-ru, 2021. "Stochastic adaptive fixed-time stabilization of chaotic systems with applications in PMSM and FWS," Chaos, Solitons & Fractals, Elsevier, vol. 153(P2).
    2. Chen, Wei-Ching, 2008. "Nonlinear dynamics and chaos in a fractional-order financial system," Chaos, Solitons & Fractals, Elsevier, vol. 36(5), pages 1305-1314.
    3. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    4. Cheng, Haoxin & Li, Haihong & Dai, Qionglin & Yang, Junzhong, 2023. "A deep reinforcement learning method to control chaos synchronization between two identical chaotic systems," Chaos, Solitons & Fractals, Elsevier, vol. 174(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ding, Jianpeng & Lei, Youming & Xie, Jianfei & Small, Michael, 2024. "Chaos synchronization of two coupled map lattice systems using safe reinforcement learning," Chaos, Solitons & Fractals, Elsevier, vol. 186(C).
    2. Hongxin Yu & Lihui Zhang & Meng Zhang & Fengyue Jin & Yibing Wang, 2024. "Coordinated Ramp Metering Considering the Dynamics of Mixed-Autonomy Traffic," Sustainability, MDPI, vol. 16(22), pages 1-26, November.
    3. Ren, Jinfu & Liu, Yang & Liu, Jiming, 2023. "Chaotic behavior learning via information tracking," Chaos, Solitons & Fractals, Elsevier, vol. 175(P1).
    4. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    5. Wang, Lei & Chen, Yi-Ming, 2020. "Shifted-Chebyshev-polynomial-based numerical algorithm for fractional order polymer visco-elastic rotating beam," Chaos, Solitons & Fractals, Elsevier, vol. 132(C).
    6. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    7. Lixiang Zhang & Yan Yan & Yaoguang Hu, 2024. "Deep reinforcement learning for dynamic scheduling of energy-efficient automated guided vehicles," Journal of Intelligent Manufacturing, Springer, vol. 35(8), pages 3875-3888, December.
    8. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    9. Benjamin Heinbach & Peter Burggräf & Johannes Wagner, 2024. "gym-flp: A Python Package for Training Reinforcement Learning Algorithms on Facility Layout Problems," SN Operations Research Forum, Springer, vol. 5(1), pages 1-26, March.
    10. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
    11. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    12. Pratap, A. & Raja, R. & Cao, J. & Lim, C.P. & Bagdasar, O., 2019. "Stability and pinning synchronization analysis of fractional order delayed Cohen–Grossberg neural networks with discontinuous activations," Applied Mathematics and Computation, Elsevier, vol. 359(C), pages 241-260.
    13. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    14. Fendzi Donfack, Emmanuel & Nguenang, Jean Pierre & Nana, Laurent, 2020. "On the traveling waves in nonlinear electrical transmission lines with intrinsic fractional-order using discrete tanh method," Chaos, Solitons & Fractals, Elsevier, vol. 131(C).
    15. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    16. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    17. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    18. Li, Xing-Yu & Wu, Kai-Ning & Liu, Xiao-Zhen, 2023. "Mittag–Leffler stabilization for short memory fractional reaction-diffusion systems via intermittent boundary control," Applied Mathematics and Computation, Elsevier, vol. 449(C).
    19. Zhou, Zhipeng & Zhuo, Wen & Cui, Jianqiang & Luan, Haiying & Chen, Yudi & Lin, Dong, 2025. "Developing a deep reinforcement learning model for safety risk prediction at subway construction sites," Reliability Engineering and System Safety, Elsevier, vol. 257(PB).
    20. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:chsofr:v:192:y:2025:i:c:s0960077925000463. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Thayer, Thomas R. (email available below). General contact details of provider: https://www.journals.elsevier.com/chaos-solitons-and-fractals .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.