Printed from https://ideas.repec.org/a/eee/appene/v399y2025ics0306261925011535.html

Hybrid multi-agent deep reinforcement learning for multi-type mobile resources dispatching under transportation and power network recovery

Author

Listed:
  • Sun, Shaohua
  • Li, Gengfeng
  • Bie, Zhaohong
  • Zhang, Dingmao
  • Huang, Yuxiong

Abstract

Rainstorm waterlogging or typhoons can not only cause serious failures in the power network (PN) but also disrupt normal traffic in the transportation network (TN). Equipment faults in the PN affect the normal power supply of critical loads, while the interruption of the TN severely limits the flexible transfer of mobile resources for recovery of the transportation and power network (TPN). Previous work addresses the dispatching of multi-type mobile resources (MMRs) for power network recovery only under the assumption of a healthy TN, which renders the resulting dispatching strategies impractical. To fill this gap, this paper proposes a dispatching model of MMRs for collaborative recovery of the TPN, embedding the dispatching behaviors of road repair crews (RRCs) into road repair constraints. To solve this model, road islands and various topology update strategies are first introduced to simplify shortest-path searching for MMR routing. The dispatching model of MMRs is then formulated as a parameterized-action Markov decision process, in which MMRs are modeled as different types of intelligent agents with distinct discrete-continuous dispatching characteristics. Finally, a hybrid multi-agent deep reinforcement learning (HMADRL) method with a master-slave architecture is developed to improve the solution efficiency and convergence speed of the model: the master module describes the recovery process of the TN through the dispatching of RRCs, and the slave module recovers the PN based on the path update strategies. Case studies on a 15-node PN (18-node TN), a 33-node PN (45-node TN), and a practical example demonstrate that this approach improves the practicality of dispatching strategies and the recovery efficiency of the TPN.
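The path update idea in the abstract (re-running shortest-path search over the road network as damaged roads are repaired) can be illustrated with a minimal sketch. The toy network, travel times, and the `dijkstra` helper below are illustrative assumptions for exposition, not the paper's implementation or data.

```python
import heapq

def dijkstra(adj, src, dst, blocked):
    """Shortest travel time from src to dst, skipping blocked (damaged) roads."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            # Treat roads as undirected; a blocked road is impassable either way.
            if (u, v) in blocked or (v, u) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical 4-node road network: neighbor lists with travel times.
adj = {
    1: [(2, 1.0), (3, 4.0)],
    2: [(1, 1.0), (3, 1.0)],
    3: [(1, 4.0), (2, 1.0), (4, 1.0)],
    4: [(3, 1.0)],
}
blocked = {(2, 3)}                       # damaged road forces the detour 1-3
before = dijkstra(adj, 1, 4, blocked)    # 1-3-4 = 5.0
blocked.discard((2, 3))                  # an RRC repairs the road: topology update
after = dijkstra(adj, 1, 4, blocked)     # 1-2-3-4 = 3.0
```

In the paper's setting this recomputation would be triggered only when an RRC action changes the road topology (hence the "path update strategies"), rather than at every decision step.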

Suggested Citation

  • Sun, Shaohua & Li, Gengfeng & Bie, Zhaohong & Zhang, Dingmao & Huang, Yuxiong, 2025. "Hybrid multi-agent deep reinforcement learning for multi-type mobile resources dispatching under transportation and power network recovery," Applied Energy, Elsevier, vol. 399(C).
  • Handle: RePEc:eee:appene:v:399:y:2025:i:c:s0306261925011535
    DOI: 10.1016/j.apenergy.2025.126423

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925011535
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.126423?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    2. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    3. Su, Chutian & Wang, Yi & Strbac, Goran, 2025. "Coordinated electric vehicles dispatch for multi-service provisions: A comprehensive review of modelling and coordination approaches," Renewable and Sustainable Energy Reviews, Elsevier, vol. 223(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    6. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    7. Xu, Hairun & Zhang, Ao & Wang, Qingle & Hu, Yang & Fang, Fang & Cheng, Long, 2025. "Quantum Reinforcement Learning for real-time optimization in Electric Vehicle charging systems," Applied Energy, Elsevier, vol. 383(C).
    8. Yang, Ruizhang & Xiao, Zhuang & Xiong, Wei & Hou, Yunhe, 2026. "Coordinative multi-stage approach to railway energy system resilience enhancement: From risk-aware FTPSS planning to emergency energy management and adaptive train control," Applied Energy, Elsevier, vol. 402(PB).
    9. Mahmud, Sakib & Sayed, Aya Nabil & Himeur, Yassine & Nhlabatsi, Armstrong & Bensaali, Faycal, 2026. "A comprehensive review of deep reinforcement learning applications from centralized power generation to modern energy internet frameworks," Renewable and Sustainable Energy Reviews, Elsevier, vol. 226(PE).
    10. Zhou, Xinlei & Du, Han & Xue, Shan & Ma, Zhenjun, 2024. "Recent advances in data mining and machine learning for enhanced building energy management," Energy, Elsevier, vol. 307(C).
    11. Lu, Qing-Chang & Wang, Shixin & Xu, Peng-Cheng & Li, Jing & Meng, Xu & Hussain, Adil, 2025. "Modeling the dependency relationship of coupled power and transportation networks," Energy, Elsevier, vol. 320(C).
    12. Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
    13. Hu, Xiaorui & Guo, Haotian & Lao, Keng-Weng & Hao, Junkun & Liu, Fengrui & Ren, Zhongyu, 2025. "MiniRocket-MARL synergy for storm tide resilience: MESS-DV enhanced recovery in coastal distribution networks," Applied Energy, Elsevier, vol. 401(PB).
    14. Wang, Yi & Qiu, Dawei & He, Yinglong & Zhou, Quan & Strbac, Goran, 2023. "Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling," Energy, Elsevier, vol. 284(C).
    15. Song, Yingjie & Ngoduy, Dong & Ding, Chuan, 2026. "Coordinated emergency resource allocation for resilience enhancement in post-disaster electrified transportation networks," Journal of Transport Geography, Elsevier, vol. 130(C).
    16. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrero, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    17. Liangcai Zhou & Long Huo & Linlin Liu & Hao Xu & Rui Chen & Xin Chen, 2025. "Optimal Power Flow for High Spatial and Temporal Resolution Power Systems with High Renewable Energy Penetration Using Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 18(7), pages 1-14, April.
    18. An, Sihai & Qiu, Jing & Lin, Jiafeng & Yao, Zongyu & Liang, Qijun & Lu, Xin, 2025. "Planning of a multi-agent mobile robot-based adaptive charging network for enhancing power system resilience under extreme conditions," Applied Energy, Elsevier, vol. 395(C).
    19. Lan, Penghang & Chen, She & Li, Qihang & Li, Kelin & Wang, Feng & Zhao, Yaoxun, 2024. "Intelligent hydrogen-ammonia combined energy storage system with deep reinforcement learning," Renewable Energy, Elsevier, vol. 237(PB).
    20. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:399:y:2025:i:c:s0306261925011535. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.