Printed from https://ideas.repec.org/a/eee/energy/v271y2023ics0360544223004814.html

Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load

Authors

Listed:
  • Lu, Yu
  • Xiang, Yue
  • Huang, Yuan
  • Yu, Bin
  • Weng, Liguo
  • Liu, Junyong

Abstract

The increasing integration of distributed resources such as distributed generations (DGs), energy storage systems (ESSs), and flexible loads (FLs) has ushered in a new era for the active distribution system (ADS), characterized by more reliable, economical, and low-carbon operation. Nonetheless, as these distributed resources grow in number and variety, realizing self-consistent and self-optimal operation among them has become a major challenge for the ADS. In this paper, a multi-agent deep reinforcement learning (MADRL) based algorithm for real-time optimal scheduling of the ADS is proposed, in which the uncertainties of renewable DGs (RDGs), loads, and electricity prices are considered. The control variables comprise the active and reactive power of dispatchable thermal DGs, the reactive power of photovoltaic and wind turbine DGs, the exchange power of ESSs, and the demand response (DR) of FLs. In addition, the region ownership of distributed resources is considered in the MADRL framework to resolve the partitioned optimization problem in a large-scale ADS. Finally, the effectiveness and superiority of the proposed algorithm are demonstrated on 33-node and 152-node active distribution systems, in terms of both cost-effectiveness and adaptation to uncertainty.
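The partitioned, per-region learning idea sketched in the abstract can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the paper's MADRL algorithm: it uses independent tabular Q-learning agents (one per hypothetical region) choosing a discrete ESS dispatch action against a made-up two-level price signal, just to show the shape of a multi-agent scheduling loop.

```python
# Illustrative sketch only -- NOT the paper's MADRL algorithm. One
# tabular Q-learning agent per "region" picks a discrete ESS dispatch
# action each hour; the price signal, reward, and dynamics are all
# hypothetical toy choices.
import random

ACTIONS = [-1.0, 0.0, 1.0]  # discharge / idle / charge (per-unit, toy values)

class RegionAgent:
    """One agent per region, as in a partitioned multi-agent setup."""
    def __init__(self, n_states, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = [[0.0] * len(ACTIONS) for _ in range(n_states)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, s):
        if random.random() < self.eps:        # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[s][a])

    def learn(self, s, a, r, s2):             # one-step Q-learning update
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def price(hour):
    # Stylised two-level electricity price: cheap overnight, dear by day.
    return 0.5 if hour < 8 else 1.5

def reward(hour, a):
    # Buy low / sell high relative to a mean price of 1.0 (toy signal).
    return ACTIONS[a] * (1.0 - price(hour))

random.seed(0)
agents = [RegionAgent(n_states=24) for _ in range(3)]  # three regions
for _episode in range(500):
    for hour in range(24):
        nxt = (hour + 1) % 24
        for ag in agents:                     # agents learn independently
            a = ag.act(hour)
            ag.learn(hour, a, reward(hour, a), nxt)

# A trained agent should charge (action index 2) in cheap hours and
# discharge (index 0) in expensive hours.
cheap = max(range(3), key=lambda a: agents[0].q[2][a])
dear = max(range(3), key=lambda a: agents[0].q[12][a])
print("cheap-hour action:", ACTIONS[cheap], "dear-hour action:", ACTIONS[dear])
```

In the paper's setting each agent would instead use a deep network over continuous states (RDG output, load, price uncertainty) and act on the control variables listed in the abstract; the tabular toy above only shows the per-region learning loop.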

Suggested Citation

  • Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
  • Handle: RePEc:eee:energy:v:271:y:2023:i:c:s0360544223004814
    DOI: 10.1016/j.energy.2023.127087

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544223004814
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2023.127087?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ifaei, Pouya & Nazari-Heris, Morteza & Tayerani Charmchi, Amir Saman & Asadi, Somayeh & Yoo, ChangKyoo, 2023. "Sustainable energies and machine learning: An organized review of recent applications and challenges," Energy, Elsevier, vol. 266(C).
    2. Xiang, Yue & Zhou, Lili & Huang, Yuan & Zhang, Xin & Liu, Youbo & Liu, Junyong, 2021. "Reactive coordinated optimal operation of distributed wind generation," Energy, Elsevier, vol. 218(C).
    3. Yang, Zhichun & Yang, Fan & Min, Huaidong & Tian, Hao & Hu, Wei & Liu, Jian & Eghbalian, Nasrin, 2023. "Energy management programming to reduce distribution network operating costs in the presence of electric vehicles and renewable energy sources," Energy, Elsevier, vol. 263(PA).
    4. Ma, Wei & Wang, Wei & Chen, Zhe & Wu, Xuezhi & Hu, Ruonan & Tang, Fen & Zhang, Weige, 2021. "Voltage regulation methods for active distribution networks considering the reactive power optimization of substations," Applied Energy, Elsevier, vol. 284(C).
    5. Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).
    6. Yao, Haotian & Xiang, Yue & Liu, Junyong, 2022. "Exploring multiple investment strategies for non-utility-owned DGs: A decentralized risked-based approach," Applied Energy, Elsevier, vol. 326(C).
    7. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    8. Castillo, Victhalia Zapata & Boer, Harmen-Sytze de & Muñoz, Raúl Maícas & Gernaat, David E.H.J. & Benders, René & van Vuuren, Detlef, 2022. "Future global electricity demand load curves," Energy, Elsevier, vol. 258(C).
    9. Esmaeili, Mobin & Sedighizadeh, Mostafa & Esmaili, Masoud, 2016. "Multi-objective optimal reconfiguration and DG (Distributed Generation) power allocation in distribution networks using Big Bang-Big Crunch algorithm considering load uncertainty," Energy, Elsevier, vol. 103(C), pages 86-99.
    10. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    11. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    12. Sun, Qirun & Wu, Zhi & Gu, Wei & Zhu, Tao & Zhong, Lei & Gao, Ting, 2021. "Flexible expansion planning of distribution system integrating multiple renewable energy sources: An approximate dynamic programming approach," Energy, Elsevier, vol. 226(C).
    13. Tsao, Yu-Chung & Beyene, Tsehaye Dedimas & Thanh, Vo-Van & Gebeyehu, Sisay Geremew & Kuo, Tsai-Chi, 2022. "Power distribution network design considering the distributed generations and differential and dynamic pricing," Energy, Elsevier, vol. 241(C).
    14. Siqin, Zhuoya & Niu, DongXiao & Wang, Xuejie & Zhen, Hao & Li, MingYu & Wang, Jingbo, 2022. "A two-stage distributionally robust optimization model for P2G-CCHP microgrid considering uncertainty and carbon emission," Energy, Elsevier, vol. 260(C).
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Guo, Tianyu & Guo, Qi & Huang, Libin & Guo, Haiping & Lu, Yuanhong & Tu, Liang, 2023. "Microgrid source-network-load-storage master-slave game optimization method considering the energy storage overcharge/overdischarge risk," Energy, Elsevier, vol. 282(C).
    2. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    3. Weicheng Zhou & Ping Zhao & Yifei Lu, 2023. "Collaborative Optimal Configuration of a Mobile Energy Storage System and a Stationary Energy Storage System to Cope with Regional Grid Blackouts in Extreme Scenarios," Energies, MDPI, vol. 16(23), pages 1-17, December.
    4. Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
    5. Sicheng Wang & Weiqing Sun, 2023. "Capacity Value Assessment for a Combined Power Plant System of New Energy and Energy Storage Based on Robust Scheduling Rules," Sustainability, MDPI, vol. 15(21), pages 1-19, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    4. Yin, Linfei & Li, Yu, 2022. "Hybrid multi-agent emotional deep Q network for generation control of multi-area integrated energy systems," Applied Energy, Elsevier, vol. 324(C).
    5. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    6. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    7. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    8. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems:A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    9. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    10. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    11. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    12. A.S. Jameel Hassan & Umar Marikkar & G.W. Kasun Prabhath & Aranee Balachandran & W.G. Chaminda Bandara & Parakrama B. Ekanayake & Roshan I. Godaliyadda & Janaka B. Ekanayake, 2021. "A Sensitivity Matrix Approach Using Two-Stage Optimization for Voltage Regulation of LV Networks with High PV Penetration," Energies, MDPI, vol. 14(20), pages 1-24, October.
    13. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    14. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    15. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    16. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    17. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    18. Nasreddine Belbachir & Mohamed Zellagui & Samir Settoul & Claude Ziad El-Bayeh & Ragab A. El-Sehiemy, 2023. "Multi Dimension-Based Optimal Allocation of Uncertain Renewable Distributed Generation Outputs with Seasonal Source-Load Power Uncertainties in Electrical Distribution Network Using Marine Predator Al," Energies, MDPI, vol. 16(4), pages 1-24, February.
    19. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    20. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:271:y:2023:i:c:s0360544223004814. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.