
Multi-Microgrid Collaborative Optimization Scheduling Using an Improved Multi-Agent Soft Actor-Critic Algorithm

Author

Listed:
  • Jiankai Gao

    (School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China)

  • Yang Li

    (School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China)

  • Bin Wang

    (State Grid Jining Power Supply Company, Jining 272000, China)

  • Haibo Wu

    (School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China)

Abstract

The implementation of a multi-microgrid (MMG) system with multiple renewable energy sources facilitates electricity trading. To tackle the energy management problem of an MMG system consisting of multiple renewable energy microgrids belonging to different operating entities, this paper proposes an MMG collaborative optimization scheduling model based on a multi-agent centralized-training, distributed-execution framework. To enhance generalization in dealing with various uncertainties, we also propose an improved multi-agent soft actor-critic (MASAC) algorithm, which enables energy transactions between the agents in the MMG and employs automated machine learning (AutoML) to optimize the MASAC hyperparameters, further improving the generalization of deep reinforcement learning (DRL). The test results demonstrate that the proposed method successfully achieves power complementarity between the different entities and reduces the MMG system's operating cost. The proposed method also significantly outperforms other state-of-the-art reinforcement learning algorithms, with better economy and higher computational efficiency.
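The improved MASAC in the abstract builds on soft actor-critic (SAC). As an illustrative sketch only, not the authors' implementation, the entropy-regularised (soft) Bellman target that distinguishes SAC-family methods from plain actor-critic can be written as follows; all numeric values are hypothetical.

```python
# Illustrative sketch, not the paper's code: the soft Bellman target used by
# soft actor-critic (SAC), which MASAC extends to multiple agents under
# centralized training with distributed execution.

def soft_bellman_target(reward, q1_next, q2_next, logp_next,
                        gamma=0.99, alpha=0.2, done=False):
    """Soft TD target: r + gamma * (min(Q1, Q2) - alpha * log pi(a'|s'))."""
    q_min = min(q1_next, q2_next)           # clipped double-Q curbs overestimation
    soft_value = q_min - alpha * logp_next  # entropy bonus encourages exploration
    return reward + (0.0 if done else gamma) * soft_value

# Toy call: reward 1.0, next-state critic estimates 5.0 and 4.8,
# next-action log-probability -1.5 under the current policy.
target = soft_bellman_target(1.0, 5.0, 4.8, -1.5)  # 1.0 + 0.99 * (4.8 + 0.3)
```

The temperature `alpha` weighting the entropy term is one of the hyperparameters an AutoML search, as described in the abstract, would tune.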

Suggested Citation

  • Jiankai Gao & Yang Li & Bin Wang & Haibo Wu, 2023. "Multi-Microgrid Collaborative Optimization Scheduling Using an Improved Multi-Agent Soft Actor-Critic Algorithm," Energies, MDPI, vol. 16(7), pages 1-21, April.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:7:p:3248-:d:1116390
    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/7/3248/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/7/3248/
    Download Restriction: no

    References listed on IDEAS

    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Hakimi, Seyed Mehdi & Hasankhani, Arezoo & Shafie-khah, Miadreza & Catalão, João P.S., 2021. "Stochastic planning of a multi-microgrid considering integration of renewable energy resources and real-time electricity market," Applied Energy, Elsevier, vol. 298(C).
    3. Li, Yang & Han, Meng & Shahidehpour, Mohammad & Li, Jiazheng & Long, Chao, 2023. "Data-driven distributionally robust scheduling of community integrated energy systems with uncertain renewable generations considering integrated demand response," Applied Energy, Elsevier, vol. 335(C).
    4. Chen, Weidong & Wang, Junnan & Yu, Guanyi & Chen, Jiajia & Hu, Yumeng, 2022. "Research on day-ahead transactions between multi-microgrid based on cooperative game model," Applied Energy, Elsevier, vol. 316(C).
    5. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    6. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    7. Li, Yang & Feng, Bo & Wang, Bin & Sun, Shuchao, 2022. "Joint planning of distributed generations and energy storage in active distribution networks: A Bi-Level programming approach," Energy, Elsevier, vol. 245(C).
    8. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
    9. Li, Yang & Wang, Ruinong & Li, Yuanzheng & Zhang, Meng & Long, Chao, 2023. "Wind power forecasting considering data privacy protection: A federated deep reinforcement learning approach," Applied Energy, Elsevier, vol. 329(C).
    10. Zhang, Xizheng & Wang, Zeyu & Lu, Zhangyu, 2022. "Multi-objective load dispatch for microgrid with electric vehicles using modified gravitational search and particle swarm optimization algorithm," Applied Energy, Elsevier, vol. 306(PA).
11. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    12. Xie, Peilin & Tan, Sen & Bazmohammadi, Najmeh & Guerrero, Josep. M. & Vasquez, Juan. C. & Alcala, Jose Matas & Carreño, Jorge El Mariachet, 2022. "A distributed real-time power management scheme for shipboard zonal multi-microgrid system," Applied Energy, Elsevier, vol. 317(C).
    13. Wang, Yong & Wu, Yuankai & Tang, Yingjuan & Li, Qin & He, Hongwen, 2023. "Cooperative energy management and eco-driving of plug-in hybrid electric vehicle via multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 332(C).
    14. Parlikar, Anupam & Schott, Maximilian & Godse, Ketaki & Kucevic, Daniel & Jossen, Andreas & Hesse, Holger, 2023. "High-power electric vehicle charging: Low-carbon grid integration pathways with stationary lithium-ion battery systems and renewable generation," Applied Energy, Elsevier, vol. 333(C).
    15. Li, Yang & Bu, Fanjin & Li, Yuanzheng & Long, Chao, 2023. "Optimal scheduling of island integrated energy systems considering multi-uncertainties and hydrothermal simultaneous transmission: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 333(C).
    16. Nawaz, Arshad & Wu, Jing & Ye, Jun & Dong, Yidi & Long, Chengnian, 2023. "Distributed MPC-based energy scheduling for islanded multi-microgrid considering battery degradation and cyclic life deterioration," Applied Energy, Elsevier, vol. 329(C).
    17. Li, Yang & Wang, Bin & Yang, Zhen & Li, Jiazheng & Chen, Chen, 2022. "Hierarchical stochastic scheduling of multi-community integrated energy systems in uncertain environments via Stackelberg game," Applied Energy, Elsevier, vol. 308(C).
    18. Kim, H.J. & Kim, M.K., 2023. "A novel deep learning-based forecasting model optimized by heuristic algorithm for energy management of microgrid," Applied Energy, Elsevier, vol. 332(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Li, Yang & Wang, Ruinong & Li, Yuanzheng & Zhang, Meng & Long, Chao, 2023. "Wind power forecasting considering data privacy protection: A federated deep reinforcement learning approach," Applied Energy, Elsevier, vol. 329(C).
    2. Li, Yang & Han, Meng & Shahidehpour, Mohammad & Li, Jiazheng & Long, Chao, 2023. "Data-driven distributionally robust scheduling of community integrated energy systems with uncertain renewable generations considering integrated demand response," Applied Energy, Elsevier, vol. 335(C).
    3. Li, Yang & Bu, Fanjin & Li, Yuanzheng & Long, Chao, 2023. "Optimal scheduling of island integrated energy systems considering multi-uncertainties and hydrothermal simultaneous transmission: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 333(C).
    4. Jani, Ali & Jadid, Shahram, 2023. "Two-stage energy scheduling framework for multi-microgrid system in market environment," Applied Energy, Elsevier, vol. 336(C).
    5. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    6. Han, Fengwu & Zeng, Jianfeng & Lin, Junjie & Zhao, Yunlong & Gao, Chong, 2023. "A stochastic hierarchical optimization and revenue allocation approach for multi-regional integrated energy systems based on cooperative games," Applied Energy, Elsevier, vol. 350(C).
    7. Wei Wei & Li Ye & Yi Fang & Yingchun Wang & Xi Chen & Zhenhua Li, 2023. "Optimal Allocation of Energy Storage Capacity in Microgrids Considering the Uncertainty of Renewable Energy Generation," Sustainability, MDPI, vol. 15(12), pages 1-17, June.
8. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    9. Wang, Jiangjiang & Deng, Hongda & Qi, Xiaoling, 2022. "Cost-based site and capacity optimization of multi-energy storage system in the regional integrated energy networks," Energy, Elsevier, vol. 261(PA).
    10. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    11. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    12. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    13. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    14. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    15. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    16. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    17. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    18. Mingshan Mo & Xinrui Xiong & Yunlong Wu & Zuyao Yu, 2023. "Deep-Reinforcement-Learning-Based Low-Carbon Economic Dispatch for Community-Integrated Energy System under Multiple Uncertainties," Energies, MDPI, vol. 16(22), pages 1-18, November.
    19. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    20. Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:7:p:3248-:d:1116390. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.