
Event-triggered approximately optimized formation control of multi-agent systems with unknown disturbances via simplified reinforcement learning

Author

Listed:
  • Yang, Yang
  • Geng, Shuocong
  • Yue, Dong
  • Gorbachev, Sergey
  • Korovin, Iakov

Abstract

An event-triggered formation control strategy is proposed for a multi-agent system (MAS) subject to unknown disturbances. Within an identifier-critic-actor neural network (NN) framework, the strategy only needs to compute the negative gradient of an approximated Hamilton-Jacobi-Bellman (HJB) equation, rather than applying gradient descent to the squared Bellman residual. This simplification removes the complicated gradient calculation associated with the squared residual of the HJB equation. The weights of the critic-actor NNs are updated only when the triggering condition is violated, which reduces the computational burden caused by frequent updates. A disturbance observer is also constructed to approximate the disturbances acting on the MAS without prior knowledge of the dynamics. Stability analysis proves that all closed-loop signals are bounded, and two numerical examples verify the effectiveness of the proposed control strategy.
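
To make the event-triggered update idea more concrete, the sketch below (in Python) implements a toy, single-agent analogue. It is only a minimal illustration under stated assumptions: the scalar dynamics f, the basis functions phi, the observer gain k_obs, the learning rate lr, and the trigger threshold tol are hypothetical placeholders, and the sketch does not reproduce the paper's actual identifier-critic-actor design, triggering condition, or multi-agent model.

    # Illustrative sketch only: a toy single-agent version of the event-triggered
    # weight-update idea from the abstract. All dynamics, gains, and network
    # shapes below are hypothetical placeholders, not the paper's design.
    import numpy as np

    CENTERS = np.linspace(-2.0, 2.0, 6)

    def phi(x):
        # Hypothetical radial basis features for the critic NN.
        return np.exp(-(x - CENTERS) ** 2)

    def dphi(x):
        # Derivative of the features w.r.t. x (used in the HJB term).
        return -2.0 * (x - CENTERS) * np.exp(-(x - CENTERS) ** 2)

    def f(x, u, d):
        # Toy scalar agent: x_dot = -x + u + d (stand-in for the MAS model).
        return -x + u + d

    dt, steps = 0.01, 500
    x, d_hat = 1.0, 0.0
    W = np.zeros(6)          # critic NN weights
    x_event = x              # state sampled at the last triggering instant
    k_obs, lr, tol = 5.0, 0.5, 0.05
    updates = 0

    for k in range(steps):
        d_true = 0.2 * np.sin(0.5 * k * dt)   # unknown disturbance
        u = -W @ phi(x_event)                 # control uses last-event weights
        # Simple high-gain disturbance observer (an assumption of this sketch):
        d_hat += dt * k_obs * (f(x, u, d_true) - f(x, u, d_hat))
        # Event trigger: weights change only when the sampling error is large.
        if abs(x - x_event) > tol:
            cost = x ** 2 + u ** 2            # assumed quadratic stage cost
            # HJB residual built from the NN value gradient W @ dphi(x); the
            # update steps along a direction proportional to the residual
            # itself, rather than running gradient descent on its square.
            residual = cost + (W @ dphi(x)) * f(x, u, d_hat)
            W -= lr * residual * dphi(x) * dt
            x_event = x
            updates += 1
        x += dt * f(x, u, d_true)

    print(f"final state {x:.3f}; weight updates at {updates} of {steps} steps")

The printed count shows the intended effect of the trigger: the weights are held constant between events instead of being updated at every integration step.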

Suggested Citation

  • Yang, Yang & Geng, Shuocong & Yue, Dong & Gorbachev, Sergey & Korovin, Iakov, 2025. "Event-triggered approximately optimized formation control of multi-agent systems with unknown disturbances via simplified reinforcement learning," Applied Mathematics and Computation, Elsevier, vol. 489(C).
  • Handle: RePEc:eee:apmaco:v:489:y:2025:i:c:s0096300324006106
    DOI: 10.1016/j.amc.2024.129149

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0096300324006106
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.amc.2024.129149?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    2. Sheng, Hao & Liu, Xia, 2020. "Composite Compensation Control of Robotic System Subject to External Disturbance and Various Actuator Faults," Mathematical Problems in Engineering, Hindawi, vol. 2020, pages 1-11, July.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    2. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    3. Zhang, Yiwen & Lin, Rui & Mei, Zhen & Lyu, Minghao & Jiang, Huaiguang & Xue, Ying & Zhang, Jun & Gao, David Wenzhong, 2024. "Interior-point policy optimization based multi-agent deep reinforcement learning method for secure home energy management under various uncertainties," Applied Energy, Elsevier, vol. 376(PA).
    4. He, Wangli & Li, Chengyuan & Cai, Chenhao & Qing, Xiangyun & Du, Wenli, 2024. "Suppressing active power fluctuations at PCC in grid-connection microgrids via multiple BESSs: A collaborative multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 373(C).
    5. Jude Suchithra & Amin Rajabi & Duane A. Robinson, 2024. "Enhancing PV Hosting Capacity of Electricity Distribution Networks Using Deep Reinforcement Learning-Based Coordinated Voltage Control," Energies, MDPI, vol. 17(20), pages 1-27, October.
    6. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    7. Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
    8. Zhang, Zhengfa & da Silva, Filipe Faria & Guo, Yifei & Bak, Claus Leth & Chen, Zhe, 2021. "Double-layer stochastic model predictive voltage control in active distribution networks with high penetration of renewables," Applied Energy, Elsevier, vol. 302(C).
    9. Kewei Wang & Yonghong Huang & Junjun Xu & Yanbo Liu, 2024. "A Flexible Envelope Method for the Operation Domain of Distribution Networks Based on “Degree of Squareness” Adjustable Superellipsoid," Energies, MDPI, vol. 17(16), pages 1-19, August.
    10. Chen, Yongdong & Liu, Youbo & Zhao, Junbo & Qiu, Gao & Yin, Hang & Li, Zhengbo, 2023. "Physical-assisted multi-agent graph reinforcement learning enabled fast voltage regulation for PV-rich active distribution network," Applied Energy, Elsevier, vol. 351(C).
    11. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    12. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    13. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    14. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    15. Prabawa, Panggah & Choi, Dae-Hyun, 2024. "Safe deep reinforcement learning-assisted two-stage energy management for active power distribution networks with hydrogen fueling stations," Applied Energy, Elsevier, vol. 375(C).
    16. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    17. Sayed, Ahmed & Jaafari, Khaled Al & Eldin, Hatem Zein & Al-Durra, Ahmed & Elsaadany, Ehab, 2025. "Feasibility-guaranteed unsupervised deep learning for real-time energy management in integrated electricity and gas systems," Energy, Elsevier, vol. 316(C).
    18. Wu, Zhi & Li, Yiqi & Zhang, Xiao & Zheng, Shu & Zhao, Jingtao, 2025. "Distributed voltage control for multi-feeder distribution networks considering transmission network voltage fluctuation based on robust deep reinforcement learning," Applied Energy, Elsevier, vol. 379(C).
    19. Jude Suchithra & Duane A. Robinson & Amin Rajabi, 2024. "A Model-Free Deep Reinforcement Learning-Based Approach for Assessment of Real-Time PV Hosting Capacity," Energies, MDPI, vol. 17(9), pages 1-12, April.
    20. Zhang, Xiao & Wu, Zhi & Sun, Qirun & Gu, Wei & Zheng, Shu & Zhao, Jingtao, 2024. "Application and progress of artificial intelligence technology in the field of distribution network voltage control: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:apmaco:v:489:y:2025:i:c:s0096300324006106. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: https://www.journals.elsevier.com/applied-mathematics-and-computation .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.