
Reinforcement Learning for Energy-Storage Systems in Grid-Connected Microgrids: An Investigation of Online vs. Offline Implementation

Authors

Listed:
  • Khawaja Haider Ali

    (Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK
    Department of Electrical Engineering, Sukkur IBA University, Sukkur 65200, Pakistan)

  • Marvin Sigalo

    (Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK)

  • Saptarshi Das

    (Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK)

  • Enrico Anderlini

    (Department of Mechanical Engineering, Roberts Building, University College London, London WC1E 7JE, UK)

  • Asif Ali Tahir

    (Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK)

  • Mohammad Abusara

    (Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK)

Abstract

Grid-connected microgrids consisting of renewable energy sources, battery storage, and load require an appropriate energy management system that controls the battery operation. Traditionally, the battery operation is optimised using 24 h of forecasted load demand and renewable energy source (RES) generation data with offline optimisation techniques, where the battery actions (charge/discharge/idle) are determined before the start of the day. Reinforcement Learning (RL) has recently been suggested as an alternative to these traditional techniques due to its ability to learn an optimal policy online using real data. Two RL approaches have been suggested in the literature: offline and online. In offline RL, the agent learns the optimal policy using predicted generation and load data; once convergence is achieved, battery commands are dispatched in real time. This method is similar to the traditional methods because it relies on forecasted data. In online RL, on the other hand, the agent learns the optimal policy by interacting with the system in real time using real data. This paper investigates the effectiveness of both approaches. To validate the method, white Gaussian noise with different standard deviations was added to the real data to create synthetic predicted data. In the first approach, the predicted data were used by an offline RL algorithm. In the second approach, the online RL algorithm interacted with streaming real data in real time, and the agent was trained using real data. A comparison of the energy costs of the two approaches shows that online RL provides better results than the offline approach when the difference between the real and predicted data exceeds 1.6%.
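The comparison described in the abstract can be sketched in a few lines of Python. Everything below is a minimal illustrative sketch, not the paper's actual model: the one-day profiles (real_load, real_pv), the noise level sigma, the toy cost model in step_cost, and the Q-learning hyperparameters are all assumed for demonstration. It shows the two ingredients the abstract names: synthetic predicted data produced by adding white Gaussian noise to real data, and a tabular Q-learning agent trained either offline on the predicted data or online on the real data it dispatches against.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-day profiles standing in for the paper's measured data.
T = 24                                                  # hourly steps
real_load = 3.0 + rng.uniform(0.0, 2.0, T)              # load demand, kW
real_pv = np.clip(4.0 * np.sin(np.linspace(0.0, np.pi, T)), 0.0, None)  # PV, kW

# Synthetic "predicted" data: white Gaussian noise added to the real profiles.
# The paper sweeps the standard deviation; sigma here is an arbitrary example.
sigma = 0.5
pred_load = real_load + rng.normal(0.0, sigma, T)
pred_pv = np.clip(real_pv + rng.normal(0.0, sigma, T), 0.0, None)

# Minimal tabular Q-learning over the battery actions charge/idle/discharge,
# with a coarsely discretised state of charge (SoC). The cost model is a toy.
ACTIONS = (-1, 0, 1)                                    # -1 discharge, +1 charge (kW)
N_SOC = 11                                              # SoC levels
q = np.zeros((T, N_SOC, len(ACTIONS)))                  # Q-table: (time, SoC, action)

def step_cost(load, pv, action, price=0.2):
    """Cost of energy imported from the grid after PV and battery action."""
    grid = load - pv + action                           # positive = import
    return price * max(grid, 0.0)

def run_episode(load, pv, eps=0.1, alpha=0.1, gamma=0.95):
    """One pass over a day, updating the Q-table; returns the day's cost."""
    soc, total = N_SOC // 2, 0.0
    for t in range(T):
        a = (rng.integers(len(ACTIONS)) if rng.random() < eps
             else int(np.argmin(q[t, soc])))            # epsilon-greedy, min cost
        nxt = int(np.clip(soc + ACTIONS[a], 0, N_SOC - 1))
        c = step_cost(load[t], pv[t], ACTIONS[a])
        target = c + (gamma * q[t + 1, nxt].min() if t + 1 < T else 0.0)
        q[t, soc, a] += alpha * (target - q[t, soc, a])
        soc, total = nxt, total + c
    return total

def train_then_dispatch(train_load, train_pv, episodes=2000):
    """Train on one data set, then dispatch greedily against the real day."""
    q[:] = 0.0
    for _ in range(episodes):
        run_episode(train_load, train_pv)
    return run_episode(real_load, real_pv, eps=0.0, alpha=0.0)

# Offline RL: the agent converges on the noisy *predicted* data before dispatch.
offline_cost = train_then_dispatch(pred_load, pred_pv)
# Online RL: the agent learns from the real data it is dispatched against
# (in the paper this happens on streaming data; batch training approximates it).
online_cost = train_then_dispatch(real_load, real_pv)
print(f"offline-trained cost: {offline_cost:.2f}  online cost: {online_cost:.2f}")

As the noise standard deviation grows, the offline agent converges to a policy tuned to the wrong day, and its dispatch cost on the real data eventually exceeds the online agent's; this is the qualitative effect the paper quantifies with its 1.6% threshold.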

Suggested Citation

  • Khawaja Haider Ali & Marvin Sigalo & Saptarshi Das & Enrico Anderlini & Asif Ali Tahir & Mohammad Abusara, 2021. "Reinforcement Learning for Energy-Storage Systems in Grid-Connected Microgrids: An Investigation of Online vs. Offline Implementation," Energies, MDPI, vol. 14(18), pages 1-18, September.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:18:p:5688-:d:632482

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/18/5688/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/18/5688/
    Download Restriction: no

    References listed on IDEAS

    1. Voyant, Cyril & Notton, Gilles & Kalogirou, Soteris & Nivet, Marie-Laure & Paoli, Christophe & Motte, Fabrice & Fouilloy, Alexis, 2017. "Machine learning methods for solar radiation forecasting: A review," Renewable Energy, Elsevier, vol. 105(C), pages 569-582.
    2. Alberto Dolara & Francesco Grimaccia & Giulia Magistrati & Gabriele Marchegiani, 2017. "Optimization Models for Islanded Micro-Grids: A Comparative Analysis between Linear Programming and Mixed Integer Programming," Energies, MDPI, vol. 10(2), pages 1-20, February.
    3. Cosic, Armin & Stadler, Michael & Mansoor, Muhammad & Zellinger, Michael, 2021. "Mixed-integer linear programming based optimization strategies for renewable energy communities," Energy, Elsevier, vol. 237(C).
    4. Sunyong Kim & Hyuk Lim, 2018. "Reinforcement Learning Based Energy Management Algorithm for Smart Energy Buildings," Energies, MDPI, vol. 11(8), pages 1-19, August.
    5. Brida V. Mbuwir & Frederik Ruelens & Fred Spiessens & Geert Deconinck, 2017. "Battery Energy Management in a Microgrid Using Batch Reinforcement Learning," Energies, MDPI, vol. 10(11), pages 1-19, November.
    6. Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
    7. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    8. Benalcazar, Pablo, 2021. "Optimal sizing of thermal energy storage systems for CHP plants considering specific investment costs: A case study," Energy, Elsevier, vol. 234(C).
    9. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    10. Kushwaha, Vishal & Pindoriya, Naran M., 2019. "A SARIMA-RVFL hybrid model assisted by wavelet decomposition for very short-term solar PV power generation forecast," Renewable Energy, Elsevier, vol. 140(C), pages 124-139.
    11. Chen, Yen-Haw & Lu, Su-Ying & Chang, Yung-Ruei & Lee, Ta-Tung & Hu, Ming-Che, 2013. "Economic analysis and optimal energy management models for microgrid systems: A case study in Taiwan," Applied Energy, Elsevier, vol. 103(C), pages 145-154.
    12. Ibrahim Salem Jahan & Vaclav Snasel & Stanislav Misak, 2020. "Intelligent Systems for Power Load Forecasting: A Study Review," Energies, MDPI, vol. 13(22), pages 1-12, November.

    Citations

Citations are extracted by the CitEc Project.

    Cited by:

    1. Kapil Deshpande & Philipp Möhl & Alexander Hämmerle & Georg Weichhart & Helmut Zörrer & Andreas Pichler, 2022. "Energy Management Simulation with Multi-Agent Reinforcement Learning: An Approach to Achieve Reliability and Resilience," Energies, MDPI, vol. 15(19), pages 1-35, October.
    2. Marvin B. Sigalo & Saptarshi Das & Ajit C. Pillai & Mohammad Abusara, 2023. "Real-Time Economic Dispatch of CHP Systems with Battery Energy Storage for Behind-the-Meter Applications," Energies, MDPI, vol. 16(3), pages 1-20, January.
    3. Khawaja Haider Ali & Mohammad Abusara & Asif Ali Tahir & Saptarshi Das, 2023. "Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids," Energies, MDPI, vol. 16(3), pages 1-17, January.
    4. Anis ur Rehman & Muhammad Ali & Sheeraz Iqbal & Aqib Shafiq & Nasim Ullah & Sattam Al Otaibi, 2022. "Artificial Intelligence-Based Control and Coordination of Multiple PV Inverters for Reactive Power/Voltage Control of Power Distribution Networks," Energies, MDPI, vol. 15(17), pages 1-13, August.
    5. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lilia Tightiz & Joon Yoo, 2022. "A Review on a Data-Driven Microgrid Management System Integrating an Active Distribution Network: Challenges, Issues, and New Trends," Energies, MDPI, vol. 15(22), pages 1-24, November.
    2. Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
    3. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    4. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    5. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    6. Juan D. Velásquez & Lorena Cadavid & Carlos J. Franco, 2023. "Intelligence Techniques in Sustainable Energy: Analysis of a Decade of Advances," Energies, MDPI, vol. 16(19), pages 1-45, October.
    7. Amrutha Raju Battula & Sandeep Vuddanti & Surender Reddy Salkuti, 2021. "Review of Energy Management System Approaches in Microgrids," Energies, MDPI, vol. 14(17), pages 1-32, September.
    8. Khawaja Haider Ali & Mohammad Abusara & Asif Ali Tahir & Saptarshi Das, 2023. "Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids," Energies, MDPI, vol. 16(3), pages 1-17, January.
    9. Ritu Kandari & Neeraj Neeraj & Alexander Micallef, 2022. "Review on Recent Strategies for Integrating Energy Storage Systems in Microgrids," Energies, MDPI, vol. 16(1), pages 1-24, December.
    10. Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
    11. Harri Aaltonen & Seppo Sierla & Rakshith Subramanya & Valeriy Vyatkin, 2021. "A Simulation Environment for Training a Reinforcement Learning Agent Trading a Battery Storage," Energies, MDPI, vol. 14(17), pages 1-20, September.
    12. Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
    13. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    14. Chen, Pengzhan & Liu, Mengchao & Chen, Chuanxi & Shang, Xin, 2019. "A battery management strategy in microgrid for personalized customer requirements," Energy, Elsevier, vol. 189(C).
    15. Sabarathinam Srinivasan & Suresh Kumarasamy & Zacharias E. Andreadakis & Pedro G. Lind, 2023. "Artificial Intelligence and Mathematical Models of Power Grids Driven by Renewable Energy Sources: A Survey," Energies, MDPI, vol. 16(14), pages 1-56, July.
    16. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    17. Alexander N. Kozlov & Nikita V. Tomin & Denis N. Sidorov & Electo E. S. Lora & Victor G. Kurbatsky, 2020. "Optimal Operation Control of PV-Biomass Gasifier-Diesel-Hybrid Systems Using Reinforcement Learning Techniques," Energies, MDPI, vol. 13(10), pages 1-20, May.
    18. Wenhao Zhuo & Andrey V. Savkin, 2019. "Profit Maximizing Control of a Microgrid with Renewable Generation and BESS Based on a Battery Cycle Life Model and Energy Price Forecasting," Energies, MDPI, vol. 12(15), pages 1-17, July.
    19. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    20. Yang, Jiaojiao & Sun, Zeyi & Hu, Wenqing & Steinmeister, Louis, 2022. "Joint control of manufacturing and onsite microgrid system via novel neural-network integrated reinforcement learning algorithms," Applied Energy, Elsevier, vol. 315(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:14:y:2021:i:18:p:5688-:d:632482. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.