
Long-term microgrid expansion planning with resilience and environmental benefits using deep reinforcement learning

Author

Listed:
  • Pang, Kexin
  • Zhou, Jian
  • Tsianikas, Stamatis
  • Coit, David W.
  • Ma, Yizhong

Abstract

Microgrids play an increasingly important role in enhancing power resilience and in reducing greenhouse gas emissions through the widespread application of distributed and renewable energy. Because of steadily growing load demand, strict power resilience requirements, and the pressing need for carbon emission reduction, microgrid expansion planning that accounts for these factors has become a topical problem. In this study, a new framework for long-term microgrid expansion planning is proposed, in which a microgrid serves as a backup power system in the event of main grid outages, evaluated from the perspectives of economy, resilience, and greenhouse gas emissions. A deep reinforcement learning method is used to solve this dynamic and stochastic optimization problem, taking into account the various uncertainties and constraints of long-range planning. Case studies of 20-year microgrid expansion planning using actual data are conducted. The simulation results demonstrate the effectiveness of the proposed framework in reducing greenhouse gas emissions and total cost, including economic losses resulting from power grid outages and the investment and operating costs of microgrid entities. In addition, the impact of customer load demand and microgrid entity prices on optimal planning policies is discussed. The results show that, under the proposed framework, microgrid expansion planning can be effectively adapted to different levels of load demand and different price-change scenarios. This work helps decision makers implement cost-effective and power-resilient microgrid expansion planning with long-term greenhouse gas emission reduction benefits.
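The article itself does not publish code on this page. Purely as an illustrative sketch of the kind of approach the abstract describes, the following minimal Python example implements a DQN-style agent (in the spirit of the Mnih et al. reference below) choosing yearly expansion actions over a 20-year horizon. The state variables, action set, cost figures, and demand dynamics here are hypothetical placeholders, not the authors' model.

```python
# Illustrative sketch only: a minimal DQN loop for yearly microgrid
# expansion decisions. All names, sizes, and cost figures are
# hypothetical placeholders, not values from the paper.
import random
import numpy as np
import torch
import torch.nn as nn

N_ACTIONS = 3   # 0: do nothing, 1: add a PV block, 2: add a battery block
STATE_DIM = 4   # e.g. [year, pv_capacity, battery_capacity, load_level]
HORIZON = 20    # 20-year planning horizon, as in the paper's case study

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

def step(state, action, rng):
    """Hypothetical transition: apply expansion, sample demand and outages."""
    year, pv, batt, load = state
    pv += 1.0 if action == 1 else 0.0
    batt += 1.0 if action == 2 else 0.0
    load *= 1.02                                    # assumed demand growth
    invest = 10.0 if action else 0.0                # placeholder capex
    outage_loss = max(load - (pv + batt), 0.0) * rng.uniform(0.0, 5.0)
    emissions = max(load - pv, 0.0) * 0.5           # placeholder CO2 penalty
    reward = -(invest + outage_loss + emissions)
    return np.array([year + 1, pv, batt, load], dtype=np.float32), reward

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
rng = random.Random(0)
buffer = []                                         # unbounded replay, for brevity

for episode in range(200):
    s = np.array([0.0, 0.0, 0.0, 10.0], dtype=np.float32)
    for t in range(HORIZON):
        # epsilon-greedy choice over expansion options
        if rng.random() < 0.1:
            a = rng.randrange(N_ACTIONS)
        else:
            a = int(qnet(torch.from_numpy(s)).argmax())
        s2, r = step(s, a, rng)
        buffer.append((s, a, r, s2, t == HORIZON - 1))
        s = s2
    # one gradient step on a random minibatch (no target network, for brevity)
    batch = rng.sample(buffer, min(32, len(buffer)))
    ss, aa, rr, ss2, done = map(np.array, zip(*batch))
    q = qnet(torch.from_numpy(ss.astype(np.float32)))
    q_sa = q[torch.arange(len(batch)), torch.from_numpy(aa.astype(np.int64))]
    with torch.no_grad():
        target = torch.from_numpy(rr.astype(np.float32)) + \
                 0.99 * qnet(torch.from_numpy(ss2.astype(np.float32))).max(1).values * \
                 torch.from_numpy((~done).astype(np.float32))
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

A full implementation of the kind the abstract describes would add a capped replay buffer, a target network, and outage and demand scenarios calibrated to actual data, rather than the toy dynamics sketched here.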

Suggested Citation

  • Pang, Kexin & Zhou, Jian & Tsianikas, Stamatis & Coit, David W. & Ma, Yizhong, 2024. "Long-term microgrid expansion planning with resilience and environmental benefits using deep reinforcement learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 191(C).
  • Handle: RePEc:eee:rensus:v:191:y:2024:i:c:s1364032123009267
    DOI: 10.1016/j.rser.2023.114068

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1364032123009267
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.rser.2023.114068?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Tsianikas, Stamatis & Yousefi, Nooshin & Zhou, Jian & Rodgers, Mark D. & Coit, David, 2021. "A storage expansion planning framework using reinforcement learning and simulation-based optimization," Applied Energy, Elsevier, vol. 290(C).
    2. Sandelic, Monika & Peyghami, Saeed & Sangwongwanich, Ariya & Blaabjerg, Frede, 2022. "Reliability aspects in microgrid design and planning: Status and power electronics-induced challenges," Renewable and Sustainable Energy Reviews, Elsevier, vol. 159(C).
    3. Feijoo, Felipe & Das, Tapas K., 2015. "Emissions control via carbon policies and microgrid generation: A bilevel model and Pareto analysis," Energy, Elsevier, vol. 90(P2), pages 1545-1555.
    4. Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
    5. Hemmati, Reza & Saboori, Hedayat & Siano, Pierluigi, 2017. "Coordinated short-term scheduling and long-term expansion planning in microgrids incorporating renewable energy resources and energy storage systems," Energy, Elsevier, vol. 134(C), pages 699-708.
    6. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    7. Zhou, Jian & Tsianikas, Stamatis & Birnie, Dunbar P. & Coit, David W., 2019. "Economic and resilience benefit analysis of incorporating battery storage to photovoltaic array generation," Renewable Energy, Elsevier, vol. 135(C), pages 652-662.
    8. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    3. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    4. Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
    5. Wu, Jingda & Huang, Chao & He, Hongwen & Huang, Hailong, 2024. "Confidence-aware reinforcement learning for energy management of electrified vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 191(C).
    6. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    7. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    8. Kunyu Wang & Rong Yang & Yongjian Zhou & Wei Huang & Song Zhang, 2022. "Design and Improvement of SD3-Based Energy Management Strategy for a Hybrid Electric Urban Bus," Energies, MDPI, vol. 15(16), pages 1-21, August.
    9. Gao, Qinxiang & Lei, Tao & Yao, Wenli & Zhang, Xingyu & Zhang, Xiaobin, 2023. "A health-aware energy management strategy for fuel cell hybrid electric UAVs based on safe reinforcement learning," Energy, Elsevier, vol. 283(C).
    10. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    11. Wei, Hongqian & Zhang, Nan & Liang, Jun & Ai, Qiang & Zhao, Wenqiang & Huang, Tianyi & Zhang, Youtong, 2022. "Deep reinforcement learning based direct torque control strategy for distributed drive electric vehicles considering active safety and energy saving performance," Energy, Elsevier, vol. 238(PB).
    12. Xu, Bin & Rathod, Dhruvang & Zhang, Darui & Yebi, Adamu & Zhang, Xueyu & Li, Xiaoya & Filipi, Zoran, 2020. "Parametric study on reinforcement learning optimized energy management strategy for a hybrid electric vehicle," Applied Energy, Elsevier, vol. 259(C).
    13. Tang, Xiaolin & Zhou, Haitao & Wang, Feng & Wang, Weida & Lin, Xianke, 2022. "Longevity-conscious energy management strategy of fuel cell hybrid electric Vehicle Based on deep reinforcement learning," Energy, Elsevier, vol. 238(PA).
    14. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    15. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    16. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    17. Chen, Mengting & Cui, Yuanlai & Wang, Xiaonan & Xie, Hengwang & Liu, Fangping & Luo, Tongyuan & Zheng, Shizong & Luo, Yufeng, 2021. "A reinforcement learning approach to irrigation decision-making for rice using weather forecasts," Agricultural Water Management, Elsevier, vol. 250(C).
    18. Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).
    19. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    20. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:rensus:v:191:y:2024:i:c:s1364032123009267. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600126/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.