
Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning

Author

Listed:
  • Harrold, Daniel J.B.
  • Cao, Jun
  • Fan, Zhong

Abstract

As the world seeks to become more sustainable, intelligent solutions are needed to increase the penetration of renewable energy. In this paper, the model-free deep reinforcement learning algorithm Rainbow Deep Q-Networks is used to control a battery in a microgrid to perform energy arbitrage and make more efficient use of solar and wind energy sources. The grid operates with its own demand and renewable generation, as well as dynamic energy pricing from a real wholesale energy market. Four scenarios are tested, including one that uses demand and price forecasts produced from local weather data. The algorithm and its subcomponents are evaluated against an actor-critic method and a linear programming model, with Rainbow outperforming all other methods. This research shows the importance of the distributional approach to reinforcement learning in complex environments, as well as how it can visualise and contextualise the agents' behaviour for real-world applications.
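The control task described in the abstract (a battery charging and discharging against a wholesale price signal) can be illustrated with a toy example. The sketch below is not the authors' implementation: it substitutes a simple tabular Q-learning agent for Rainbow DQN, and the battery capacity, power rating, synthetic price model and state discretisation are all illustrative assumptions.

# Minimal sketch (not the paper's code): toy microgrid battery arbitrage with a
# tabular Q-learning agent standing in for Rainbow DQN. All quantities below
# (capacity, power, price model, discretisation) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

CAPACITY_KWH = 10.0          # assumed battery capacity
POWER_KW = 2.5               # assumed max charge/discharge per 1-hour step
N_SOC_BINS = 11              # discretised state of charge
N_PRICE_BINS = 5             # discretised wholesale price
ACTIONS = [-1, 0, 1]         # discharge, idle, charge

def synthetic_price(t):
    """Toy diurnal price signal with noise, standing in for real market data."""
    return 50 + 30 * np.sin(2 * np.pi * (t % 24) / 24) + rng.normal(0, 5)

def step(soc, action, price):
    """Battery dynamics and arbitrage reward: pay to charge, earn to discharge.
    Losses and degradation are ignored for simplicity."""
    new_soc = np.clip(soc + action * POWER_KW, 0.0, CAPACITY_KWH)
    delivered = new_soc - soc             # energy actually moved after clipping
    reward = -delivered * price / 1000.0  # price assumed in currency per MWh
    return new_soc, reward

def discretise(soc, price):
    s = int(soc / CAPACITY_KWH * (N_SOC_BINS - 1))
    p = int(np.clip((price - 20) / 80, 0, 0.999) * N_PRICE_BINS)
    return s, p

Q = np.zeros((N_SOC_BINS, N_PRICE_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.99, 0.1

soc = CAPACITY_KWH / 2
for t in range(50_000):
    price = synthetic_price(t)
    state = discretise(soc, price)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    soc, r = step(soc, ACTIONS[a], price)
    next_state = discretise(soc, synthetic_price(t + 1))
    # Plain Q-learning update; Rainbow would instead train a distributional,
    # double, duelling, noisy, prioritised, n-step deep Q-network on this signal.
    Q[state][a] += alpha * (r + gamma * Q[next_state].max() - Q[state][a])

print("Greedy action per (SoC bin, price bin):")
print(np.argmax(Q, axis=2))

Running the sketch, the greedy policy tends to charge when the toy price is low and discharge when it is high, which is the arbitrage behaviour the paper studies at a much larger scale and with forecast inputs.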

Suggested Citation

  • Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
  • Handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221022064
    DOI: 10.1016/j.energy.2021.121958

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544221022064
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2021.121958?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Andresen, Gorm B. & Rodriguez, Rolando A. & Becker, Sarah & Greiner, Martin, 2014. "The potential for arbitrage of wind and solar surplus power in Denmark," Energy, Elsevier, vol. 76(C), pages 49-58.
    2. Díaz, Guzmán & Gómez-Aleixandre, Javier & Coto, José & Conejero, Olga, 2018. "Maximum income resulting from energy arbitrage by battery systems subject to cycle aging and price uncertainty from a dynamic programming perspective," Energy, Elsevier, vol. 156(C), pages 647-660.
    3. Richard Bellman, 1954. "Some Applications of the Theory of Dynamic Programming---A Review," Operations Research, INFORMS, vol. 2(3), pages 275-288, August.
    4. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    5. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    6. Totaro, Simone & Boukas, Ioannis & Jonsson, Anders & Cornélusse, Bertrand, 2021. "Lifelong control of off-grid microgrid with model-based reinforcement learning," Energy, Elsevier, vol. 232(C).
    7. Richard Bellman, 1954. "On some applications of the theory of dynamic programming to logistics," Naval Research Logistics Quarterly, John Wiley & Sons, vol. 1(2), pages 141-153, June.
    8. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    9. Hansen, Kenneth & Breyer, Christian & Lund, Henrik, 2019. "Status and perspectives on 100% renewable energy systems," Energy, Elsevier, vol. 175(C), pages 471-480.
    10. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    11. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    12. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    13. Carrillo, C. & Obando Montaño, A.F. & Cidrás, J. & Díaz-Dorado, E., 2013. "Review of power curve modelling for wind turbines," Renewable and Sustainable Energy Reviews, Elsevier, vol. 21(C), pages 572-581.
    14. Bogdanov, Dmitrii & Ram, Manish & Aghahosseini, Arman & Gulagi, Ashish & Oyewo, Ayobami Solomon & Child, Michael & Caldera, Upeksha & Sadovskaia, Kristina & Farfan, Javier & De Souza Noel Simas Barbos, 2021. "Low-cost renewable electricity as the key driver of the global energy transition towards sustainability," Energy, Elsevier, vol. 227(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    2. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    3. Lai, Chun Sing & Chen, Dashen & Zhang, Jinning & Zhang, Xin & Xu, Xu & Taylor, Gareth A. & Lai, Loi Lei, 2022. "Profit maximization for large-scale energy storage systems to enable fast EV charging infrastructure in distribution networks," Energy, Elsevier, vol. 259(C).
    4. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    5. Sai, Wei & Pan, Zehua & Liu, Siyu & Jiao, Zhenjun & Zhong, Zheng & Miao, Bin & Chan, Siew Hwa, 2023. "Event-driven forecasting of wholesale electricity price and frequency regulation price using machine learning algorithms," Applied Energy, Elsevier, vol. 352(C).
    6. Shabani, Masoume & Wallin, Fredrik & Dahlquist, Erik & Yan, Jinyue, 2023. "The impact of battery operating management strategies on life cycle cost assessment in real power market for a grid-connected residential battery application," Energy, Elsevier, vol. 270(C).
    7. Ruisheng Wang & Zhong Chen & Qiang Xing & Ziqi Zhang & Tian Zhang, 2022. "A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station," Sustainability, MDPI, vol. 14(3), pages 1-14, February.
    8. Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
    4. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    5. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    6. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    7. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    8. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    9. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    10. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    11. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    12. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    13. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    14. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    15. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    16. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    17. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    18. Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
    19. Oyewo, Ayobami Solomon & Solomon, A.A. & Bogdanov, Dmitrii & Aghahosseini, Arman & Mensah, Theophilus Nii Odai & Ram, Manish & Breyer, Christian, 2021. "Just transition towards defossilised energy systems for developing economies: A case study of Ethiopia," Renewable Energy, Elsevier, vol. 176(C), pages 346-365.
    20. Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221022064. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.