
Real-Time Microgrid Energy Scheduling Using Meta-Reinforcement Learning

Author

Listed:
  • Huan Shen

    (School of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Xingfa Shen

    (School of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China)

  • Yiming Chen

    (School of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China)

Abstract

With the rapid development of renewable energy and the increasing maturity of energy storage technology, microgrids are quickly becoming popular worldwide. The stochastic nature of microgrid scheduling can increase operational costs and resource wastage. To reduce operational costs and improve resource utilization, real-time scheduling of microgrids becomes particularly important. Given extensive data, reinforcement learning (RL) can learn good scheduling strategies; however, it cannot make quick, rational decisions in unfamiliar environments. Meta-learning, a method with generalization ability, can compensate for this deficiency. Therefore, this paper introduces a microgrid scheduling strategy based on RL and meta-learning. This method can quickly adapt to different environments with a small amount of training data, enabling rapid generation of an energy scheduling policy in the early stages of microgrid operation. The paper first establishes a microgrid model, including components such as energy storage, load, and distributed generation (DG). Then, a meta-reinforcement learning framework is used to train the initial scheduling strategy, taking the microgrid's various operational constraints into account. The experimental results show that the RL strategy based on model-agnostic meta-learning (MAML) has advantages in improving energy utilization and reducing operational costs in the early stages of microgrid operation. This research provides a new intelligent solution for the efficient, stable, and economical operation of microgrids in their initial stages.
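The core idea the abstract describes — learning an initialization that adapts quickly to a new microgrid from little data — can be sketched in a few lines. The code below is a minimal, hypothetical illustration, not the authors' implementation: each "task" stands in for a microgrid with its own price/load profile, compressed into a simple least-squares scheduling objective, and the meta-learner uses first-order MAML (a common simplification that skips the second-order terms of full MAML).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each "task" is a microgrid whose price/load profile
# induces a linear least-squares scheduling objective
#   L_i(theta) = (1/n) * ||A_i @ theta - b_i||^2
def make_task(dim=4, samples=20):
    A = rng.normal(size=(samples, dim))
    b = A @ rng.normal(size=dim) + 0.1 * rng.normal(size=samples)
    return A, b

def loss(theta, task):
    A, b = task
    r = A @ theta - b
    return float(r @ r) / len(b)

def grad(theta, task):
    A, b = task
    return 2.0 * A.T @ (A @ theta - b) / len(b)

def fomaml(tasks, dim=4, inner_lr=0.05, meta_lr=0.02, steps=200):
    """First-order MAML: adapt on each task with one gradient step, then
    move the meta-parameters using the gradient at the adapted point."""
    theta = np.zeros(dim)
    for _ in range(steps):
        meta_grad = np.zeros(dim)
        for task in tasks:
            adapted = theta - inner_lr * grad(theta, task)  # inner loop
            meta_grad += grad(adapted, task)                # first-order approximation
        theta -= meta_lr * meta_grad / len(tasks)
    return theta

# Meta-train across several simulated microgrids.
train_tasks = [make_task() for _ in range(8)]
theta0 = fomaml(train_tasks)

# Fast adaptation on an unseen microgrid: a few gradient steps from theta0,
# mimicking early-stage operation with only a small amount of data.
new_task = make_task()
theta = theta0.copy()
for _ in range(5):
    theta -= 0.05 * grad(theta, new_task)
print(f"loss before adaptation: {loss(theta0, new_task):.3f}, "
      f"after: {loss(theta, new_task):.3f}")
```

In the paper's setting the inner objective would be an RL return over the microgrid model (storage, load, DG) rather than a least-squares fit, but the two-loop structure — per-task adaptation inside, meta-update of the shared initialization outside — is the same.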

Suggested Citation

  • Huan Shen & Xingfa Shen & Yiming Chen, 2024. "Real-Time Microgrid Energy Scheduling Using Meta-Reinforcement Learning," Energies, MDPI, vol. 17(10), pages 1-15, May.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:10:p:2367-:d:1394397

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/10/2367/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/10/2367/
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. Zia, Muhammad Fahad & Elbouchikhi, Elhoussin & Benbouzid, Mohamed, 2018. "Microgrids energy management systems: A critical review on methods, solutions, and prospects," Applied Energy, Elsevier, vol. 222(C), pages 1033-1055.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    2. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    3. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    4. Paiho, Satu & Kiljander, Jussi & Sarala, Roope & Siikavirta, Hanne & Kilkki, Olli & Bajpai, Arpit & Duchon, Markus & Pahl, Marc-Oliver & Wüstrich, Lars & Lübben, Christian & Kirdan, Erkin & Schindler,, 2021. "Towards cross-commodity energy-sharing communities – A review of the market, regulatory, and technical situation," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    5. Polimeni, Simone & Moretti, Luca & Martelli, Emanuele & Leva, Sonia & Manzolini, Giampaolo, 2023. "A novel stochastic model for flexible unit commitment of off-grid microgrids," Applied Energy, Elsevier, vol. 331(C).
    6. Gui, Yonghao & Wei, Baoze & Li, Mingshen & Guerrero, Josep M. & Vasquez, Juan C., 2018. "Passivity-based coordinated control for islanded AC microgrid," Applied Energy, Elsevier, vol. 229(C), pages 551-561.
    7. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    8. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    9. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    10. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    11. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    12. Thomas Schmitt & Tobias Rodemann & Jürgen Adamy, 2021. "The Cost of Photovoltaic Forecasting Errors in Microgrid Control with Peak Pricing," Energies, MDPI, vol. 14(9), pages 1-13, April.
    13. Antoine Boche & Clément Foucher & Luiz Fernando Lavado Villa, 2022. "Understanding Microgrid Sustainability: A Systemic and Comprehensive Review," Energies, MDPI, vol. 15(8), pages 1-29, April.
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    16. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Muhammad Umair Safder & Mohammad J. Sanjari & Ameer Hamza & Rasoul Garmabdari & Md. Alamgir Hossain & Junwei Lu, 2023. "Enhancing Microgrid Stability and Energy Management: Techniques, Challenges, and Future Directions," Energies, MDPI, vol. 16(18), pages 1-28, September.
    18. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    19. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    20. Zichen Lu & Ying Yan, 2024. "Temperature Control of Fuel Cell Based on PEI-DDPG," Energies, MDPI, vol. 17(7), pages 1-19, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:10:p:2367-:d:1394397. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.