
Research on Data-Driven Optimal Scheduling of Power System

Authors

  • Jianxun Luo

    (School of Information and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China)

  • Wei Zhang

    (School of Information and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China)

  • Hui Wang

    (Department of Electrical Engineering, Shandong University, Jinan 250061, China)

  • Wenmiao Wei

    (Automation Academy, Huazhong University of Science and Technology, Wuhan 430074, China)

  • Jinpeng He

    (School of Information and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China)

Abstract

When a high proportion of renewable energy generating units is integrated into the power grid, the uncertainty of their output makes the grid's secure and economic dispatch problem difficult to solve effectively. Based on the proximal policy optimization (PPO) algorithm, a safe and economical grid scheduling method is designed. First, constraints on the safe and economical operation of a renewable energy power system are defined. Then, within the deep reinforcement learning framework, the Markov decision process is defined as a quintuple, and the dispatch optimization problem is cast as a Markov decision process. To address the low sample utilization of on-policy reinforcement learning strategies, a PPO optimization algorithm based on a Kullback–Leibler (KL) divergence penalty factor and the importance sampling technique is proposed, which converts the on-policy update into an off-policy one and improves sample utilization. Finally, simulation analysis of a test case shows that, in a power system with a high proportion of grid-connected renewable energy units, the proposed scheduling strategy can meet the load demand under different load trends. Across dispatch cycles with different renewable generation rates, renewable energy is absorbed to the maximum extent while the safe and economic operation of the grid is maintained.
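The abstract's central algorithmic idea — reusing trajectories collected under an older behavior policy via importance sampling while penalizing the KL divergence between the new and old policies — follows the generic PPO-penalty objective of Schulman et al. (2017). The sketch below illustrates that generic objective only; it is not the paper's implementation, whose grid-specific state, action, reward, and constraint design is not part of this record. The function and parameter names (ppo_kl_penalty_loss, adapt_kl_coef, kl_coef, kl_target) are hypothetical.

    import torch

    def ppo_kl_penalty_loss(logp_new, logp_old, advantages, kl_coef):
        """PPO surrogate loss with a KL-divergence penalty.

        The importance-sampling ratio r = pi_new(a|s) / pi_old(a|s)
        reweights advantages estimated under the old behavior policy,
        which is what lets PPO reuse previously collected samples (the
        off-policy reuse described in the abstract).
        """
        ratio = torch.exp(logp_new - logp_old)      # importance weights
        surrogate = ratio * advantages              # reweighted advantage
        approx_kl = (logp_old - logp_new).mean()    # estimator of KL(pi_old || pi_new)
        # Maximize the surrogate while penalizing drift from the old policy.
        loss = -(surrogate.mean() - kl_coef * approx_kl)
        return loss, approx_kl

    def adapt_kl_coef(kl_coef, approx_kl, kl_target=0.01):
        """Adaptive rule from the generic PPO-penalty algorithm: raise the
        penalty when the policy moves too far from pi_old, lower it when
        it barely moves."""
        if approx_kl > 1.5 * kl_target:
            return kl_coef * 2.0
        if approx_kl < kl_target / 1.5:
            return kl_coef * 0.5
        return kl_coef

In a scheduling loop, logp_old and advantages would come from stored rollout data, and kl_coef would be re-tuned between update epochs via adapt_kl_coef, so multiple gradient steps can be taken on the same batch of samples.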

Suggested Citation

  • Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:6:p:2926-:d:1104578

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/6/2926/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/6/2926/
    Download Restriction: no

    References listed on IDEAS

    1. Xiang, Yue & Lu, Yu & Liu, Junyong, 2023. "Deep reinforcement learning based topology-aware voltage regulation of distribution networks with distributed energy storage," Applied Energy, Elsevier, vol. 332(C).
    2. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    3. White, Chelsea C. & White, Douglas J., 1989. "Markov decision processes," European Journal of Operational Research, Elsevier, vol. 39(1), pages 1-16, March.
    4. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrero, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    2. Guo, Tianyu & Guo, Qi & Huang, Libin & Guo, Haiping & Lu, Yuanhong & Tu, Liang, 2023. "Microgrid source-network-load-storage master-slave game optimization method considering the energy storage overcharge/overdischarge risk," Energy, Elsevier, vol. 282(C).
    3. Eike Nohdurft & Elisa Long & Stefan Spinler, 2017. "Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers," Decision Analysis, INFORMS, vol. 14(3), pages 139-169, September.
    4. Zhu, Xingxu & Hou, Xiangchen & Li, Junhui & Yan, Gangui & Li, Cuiping & Wang, Dongbo, 2023. "Distributed online prediction optimization algorithm for distributed energy resources considering the multi-periods optimal operation," Applied Energy, Elsevier, vol. 348(C).
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).
    7. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    8. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    9. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    10. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    11. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    12. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    13. Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
    14. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    15. Nebiyu Kedir & Phuong H. D. Nguyen & Citlaly Pérez & Pedro Ponce & Aminah Robinson Fayek, 2023. "Systematic Literature Review on Fuzzy Hybrid Methods in Photovoltaic Solar Energy: Opportunities, Challenges, and Guidance for Implementation," Energies, MDPI, vol. 16(9), pages 1-38, April.
    16. Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
    17. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    18. Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
    19. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    20. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:6:p:2926-:d:1104578. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.