
Optimal Scheduling of Microgrid Based on Deep Deterministic Policy Gradient and Transfer Learning

Author

Listed:
  • Luqin Fan

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Jing Zhang

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Yu He

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Ying Liu

    (Power Grid Planning Research Center of Guizhou Power Grid Corporation, Guiyang 550002, China)

  • Tao Hu

    (Guizhou Power Grid Corporation, Guiyang 550002, China)

  • Heng Zhang

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

Abstract

A microgrid has a flexible composition and a complex operation mechanism, and it generates a large amount of data during operation. However, current optimization methods for microgrid scheduling do not effectively accumulate and utilize this scheduling knowledge. This paper proposes a microgrid optimal scheduling method based on the Deep Deterministic Policy Gradient (DDPG) and Transfer Learning (TL). The method uses Reinforcement Learning (RL) to learn the scheduling strategy and to accumulate the corresponding scheduling knowledge. The DDPG model extends the scheduling actions from a discrete action space to a continuous action space. On this basis, a TL algorithm for microgrid optimal scheduling, driven by the similarity of actual supply and demand, is proposed so that existing scheduling knowledge can be reused effectively. Simulation results indicate that the proposed method flexibly and efficiently provides optimal scheduling strategies for microgrids with complex operation mechanisms, through the effective accumulation of scheduling knowledge and its reuse via TL.
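The abstract describes the method only at a high level. As an illustrative aid, the sketch below shows the two core ideas in miniature: a DDPG actor-critic pair whose actor outputs continuous dispatch set-points (rather than selecting from a discrete action set), and a transfer step that warm-starts the actor from the source scenario whose supply-demand profile is most similar to the target. This is a minimal sketch in PyTorch under stated assumptions, not the authors' implementation; all names (Actor, Critic, transfer_warm_start) and the cosine-similarity metric are hypothetical.

    # Minimal DDPG + similarity-based transfer sketch (PyTorch).
    # Hypothetical names throughout; this is NOT the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Actor(nn.Module):
        # Maps a microgrid state (loads, renewable output, prices, storage
        # state of charge, ...) to a continuous dispatch action, e.g.
        # battery charge/discharge power.
        def __init__(self, state_dim: int, action_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, action_dim), nn.Tanh(),  # actions in [-1, 1]
            )

        def forward(self, state):
            return self.net(state)

    class Critic(nn.Module):
        # Estimates Q(s, a) for the deterministic policy.
        def __init__(self, state_dim: int, action_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
        # Polyak averaging of target-network parameters, as in standard DDPG.
        for t, s in zip(target.parameters(), source.parameters()):
            t.data.mul_(1.0 - tau).add_(tau * s.data)

    def supply_demand_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
        # One plausible similarity measure between two net-load (supply
        # minus demand) profiles: cosine similarity. The paper's exact
        # metric may differ.
        return F.cosine_similarity(a.flatten(), b.flatten(), dim=0).item()

    def transfer_warm_start(target_actor: Actor,
                            source_actors: dict,
                            source_profiles: dict,
                            target_profile: torch.Tensor) -> None:
        # Copy the weights of the source-scenario actor whose supply-demand
        # profile best matches the target scenario, as an initialization
        # for further DDPG fine-tuning on the target task.
        best = max(source_profiles,
                   key=lambda k: supply_demand_similarity(source_profiles[k],
                                                          target_profile))
        target_actor.load_state_dict(source_actors[best].state_dict())

Under this reading, training on each scenario would proceed with the standard DDPG updates (critic regression against a bootstrapped target, actor ascent on the critic, soft target-network updates), and transfer_warm_start would replace random initialization when a new supply-demand scenario arrives, so that previously accumulated scheduling knowledge is reused rather than relearned.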

Suggested Citation

  • Luqin Fan & Jing Zhang & Yu He & Ying Liu & Tao Hu & Heng Zhang, 2021. "Optimal Scheduling of Microgrid Based on Deep Deterministic Policy Gradient and Transfer Learning," Energies, MDPI, vol. 14(3), pages 1-15, January.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:3:p:584-:d:485925

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/3/584/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/3/584/
    Download Restriction: no

    References listed on IDEAS

    1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Bingyin Lei & Yue Ren & Huiyu Luan & Ruonan Dong & Xiuyuan Wang & Junli Liao & Shu Fang & Kaiye Gao, 2023. "A Review of Optimization for System Reliability of Microgrid," Mathematics, MDPI, vol. 11(4), pages 1-30, February.
    2. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    3. Bing Liu & Bowen Xu & Tong He & Wei Yu & Fanghong Guo, 2022. "Hybrid Deep Reinforcement Learning Considering Discrete-Continuous Action Spaces for Real-Time Energy Management in More Electric Aircraft," Energies, MDPI, vol. 15(17), pages 1-21, August.
    4. Ying Ji & Jianhui Wang & Jiacan Xu & Donglin Li, 2021. "Data-Driven Online Energy Scheduling of a Microgrid Based on Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-19, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    2. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    3. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    4. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    5. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    6. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    7. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    8. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    9. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    10. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    11. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    12. Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.
    13. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    14. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    15. He, Jing & Liu, Xinglu & Duan, Qiyao & Chan, Wai Kin (Victor) & Qi, Mingyao, 2023. "Reinforcement learning for multi-item retrieval in the puzzle-based storage system," European Journal of Operational Research, Elsevier, vol. 305(2), pages 820-837.
    16. Holger Mohr & Katharina Zwosta & Dimitrije Markovic & Sebastian Bitzer & Uta Wolfensteller & Hannes Ruge, 2018. "Deterministic response strategies in a trial-and-error learning task," PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-19, November.
    17. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    18. Sebastian Jaimungal, 2022. "Reinforcement learning and stochastic optimisation," Finance and Stochastics, Springer, vol. 26(1), pages 103-129, January.
    19. Timotei Lala & Darius-Pavel Chirla & Mircea-Bogdan Radac, 2021. "Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics," Energies, MDPI, vol. 15(1), pages 1-25, December.
    20. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.