
AC/DC hybrid distribution network reconfiguration with microgrid formation using multi-agent soft actor-critic

Author

Listed:
  • Wu, Tao
  • Wang, Jianhui
  • Lu, Xiaonan
  • Du, Yuhua

Abstract

Recent extreme events have triggered tremendous concern about distribution system resilience. Meanwhile, the high penetration of inverter-interfaced distributed generators (DGs) and a diversified source and load mix facilitate the development and implementation of hybrid AC and DC distribution networks (HDNs). This paper proposes a deep reinforcement learning (DRL)-based approach for distribution network reconfiguration with microgrid formation in the face of extreme events. The proposed optimization model facilitates critical service restoration by forming isolated sections nested inside the HDNs when severe power outages occur (e.g., disconnection from the main grid). The operational characteristics of isolated HDNs (e.g., droop-controlled nodes in AC and DC sections, the lack of slack buses in autonomous operation) are considered. To reduce the computational burden, a multi-agent soft actor-critic (MA-SAC) approach is developed to solve the proposed reconfiguration problem, in which multiple agents coordinate to control circuit breakers that sectionalize the HDNs and can adapt to different system states and scales. Simulation tests on two test systems verify the validity of the proposed approach.
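This page reproduces only the abstract, so the Python/PyTorch snippet below is a minimal, illustrative sketch of the discrete-action soft actor-critic update that a per-area switching agent of the kind described above could use. Every name (MLP, BreakerAgent, state_dim, n_actions) is a placeholder chosen here for illustration, not taken from the authors' implementation; in the paper each agent controls circuit breakers in part of the HDN, which is what the discrete action set stands in for.

    # Illustrative sketch only: a single discrete-action soft actor-critic agent.
    # Names and dimensions are placeholders, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MLP(nn.Module):
        def __init__(self, in_dim, out_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x):
            return self.net(x)

    class BreakerAgent:
        """One agent choosing among discrete breaker configurations."""

        def __init__(self, state_dim, n_actions, gamma=0.99, alpha=0.2, lr=3e-4):
            self.actor = MLP(state_dim, n_actions)  # policy logits over breaker settings
            self.q1 = MLP(state_dim, n_actions)     # twin critics
            self.q2 = MLP(state_dim, n_actions)
            self.q1_target = MLP(state_dim, n_actions)
            self.q2_target = MLP(state_dim, n_actions)
            self.q1_target.load_state_dict(self.q1.state_dict())
            self.q2_target.load_state_dict(self.q2.state_dict())
            self.gamma, self.alpha = gamma, alpha
            self.opt_actor = torch.optim.Adam(self.actor.parameters(), lr=lr)
            self.opt_q = torch.optim.Adam(
                list(self.q1.parameters()) + list(self.q2.parameters()), lr=lr)

        def act(self, state):
            # Sample a breaker configuration from the stochastic policy.
            probs = F.softmax(self.actor(state), dim=-1)
            return torch.distributions.Categorical(probs).sample()

        def update(self, s, a, r, s_next, done):
            # Soft Bellman target: reward plus discounted, entropy-regularized value.
            with torch.no_grad():
                next_probs = F.softmax(self.actor(s_next), dim=-1)
                next_logp = torch.log(next_probs + 1e-8)
                q_next = torch.min(self.q1_target(s_next), self.q2_target(s_next))
                v_next = (next_probs * (q_next - self.alpha * next_logp)).sum(-1)
                target = r + self.gamma * (1.0 - done) * v_next

            # Critic step: regress both Q-networks toward the target for the taken action.
            q1_a = self.q1(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
            q2_a = self.q2(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
            q_loss = F.mse_loss(q1_a, target) + F.mse_loss(q2_a, target)
            self.opt_q.zero_grad()
            q_loss.backward()
            self.opt_q.step()

            # Actor step: maximize entropy-regularized expected Q under the policy.
            probs = F.softmax(self.actor(s), dim=-1)
            logp = torch.log(probs + 1e-8)
            q_min = torch.min(self.q1(s), self.q2(s)).detach()
            actor_loss = (probs * (self.alpha * logp - q_min)).sum(-1).mean()
            self.opt_actor.zero_grad()
            actor_loss.backward()
            self.opt_actor.step()

            # Polyak averaging of the target critics.
            for tgt, src in ((self.q1_target, self.q1), (self.q2_target, self.q2)):
                for p_t, p in zip(tgt.parameters(), src.parameters()):
                    p_t.data.mul_(0.995).add_(0.005 * p.data)

In the multi-agent setting described in the abstract, one such agent would be instantiated per network section and coordinated through the joint objective of the restored HDN; that coordination layer is omitted from this sketch.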

Suggested Citation

  • Wu, Tao & Wang, Jianhui & Lu, Xiaonan & Du, Yuhua, 2022. "AC/DC hybrid distribution network reconfiguration with microgrid formation using multi-agent soft actor-critic," Applied Energy, Elsevier, vol. 307(C).
  • Handle: RePEc:eee:appene:v:307:y:2022:i:c:s0306261921014604
    DOI: 10.1016/j.apenergy.2021.118189

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261921014604
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2021.118189?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.


    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Ning Xin & Laijun Chen & Linrui Ma & Yang Si, 2022. "A Rolling Horizon Optimization Framework for Resilient Restoration of Active Distribution Systems," Energies, MDPI, vol. 15(9), pages 1-14, April.
    2. Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
    3. Zhang, Lu & Yu, Shunjiang & Zhang, Bo & Li, Gen & Cai, Yongxiang & Tang, Wei, 2023. "Outage management of hybrid AC/DC distribution systems: Co-optimize service restoration with repair crew and mobile energy storage system dispatch," Applied Energy, Elsevier, vol. 335(C).
    4. Xie, Haipeng & Tang, Lingfeng & Zhu, Hao & Cheng, Xiaofeng & Bie, Zhaohong, 2023. "Robustness assessment and enhancement of deep reinforcement learning-enabled load restoration for distribution systems," Reliability Engineering and System Safety, Elsevier, vol. 237(C).
    5. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    6. Mohammad Javad Bordbari & Fuzhan Nasiri, 2024. "Networked Microgrids: A Review on Configuration, Operation, and Control Strategies," Energies, MDPI, vol. 17(3), pages 1-28, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    2. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    3. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    4. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    5. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    6. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    7. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    8. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    9. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    10. Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.
    11. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    12. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    13. He, Jing & Liu, Xinglu & Duan, Qiyao & Chan, Wai Kin (Victor) & Qi, Mingyao, 2023. "Reinforcement learning for multi-item retrieval in the puzzle-based storage system," European Journal of Operational Research, Elsevier, vol. 305(2), pages 820-837.
    14. Holger Mohr & Katharina Zwosta & Dimitrije Markovic & Sebastian Bitzer & Uta Wolfensteller & Hannes Ruge, 2018. "Deterministic response strategies in a trial-and-error learning task," PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-19, November.
    15. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    16. Sebastian Jaimungal, 2022. "Reinforcement learning and stochastic optimisation," Finance and Stochastics, Springer, vol. 26(1), pages 103-129, January.
    17. Timotei Lala & Darius-Pavel Chirla & Mircea-Bogdan Radac, 2021. "Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics," Energies, MDPI, vol. 15(1), pages 1-25, December.
    18. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    19. Liu, Zhichen & Li, Ying & Zhang, Zhaoyi & Yu, Wenbo, 2022. "A new evacuation accessibility analysis approach based on spatial information," Reliability Engineering and System Safety, Elsevier, vol. 222(C).
    20. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer;The Industrial Organization Society, vol. 55(1), pages 155-171, August.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:307:y:2022:i:c:s0306261921014604. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.