Printed from https://ideas.repec.org/a/eee/rensus/v137y2021ics1364032120309023.html

Applications of reinforcement learning in energy systems

Author

Listed:
  • Perera, A.T.D.
  • Kamalaruban, Parameswaran

Abstract

Energy systems are undergoing major transitions to facilitate the large-scale penetration of renewable energy technologies and to improve efficiency, which is bringing many additional sectors into the energy system domain. As the complexity of this domain increases, it becomes challenging to control energy flows using existing techniques based on physical models. Moreover, although data-driven methods such as reinforcement learning (RL) have gained considerable attention in many other fields, a direct shift to RL is not feasible in the energy domain, despite its growing complexity. To this end, this review uses a top-down approach to understand this behavior by examining the current state of the art.
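As a purely illustrative aside (not taken from the paper), the sketch below shows the kind of control loop the abstract refers to: a minimal tabular Q-learning agent dispatching a toy battery against a time-varying electricity price. All names and numbers here (N_LEVELS, PRICE, the learning rates) are hypothetical assumptions chosen for the sketch, not anything specified by the authors.

    # Minimal sketch, assuming a toy battery-dispatch problem:
    # state = discretised state of charge, action = charge/idle/discharge,
    # reward = revenue at the current (hypothetical) price.
    import random

    N_LEVELS = 5                 # discretised battery state of charge: 0..4
    ACTIONS = (-1, 0, 1)         # discharge one level, idle, charge one level
    PRICE = (0.10, 0.30, 0.50)   # hypothetical repeating price signal per slot
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    # Q-table over (state of charge, time slot, action)
    Q = {(s, t, a): 0.0
         for s in range(N_LEVELS)
         for t in range(len(PRICE))
         for a in ACTIONS}

    def step(soc, t, a):
        """Apply a charge/discharge action; reward is revenue at the current price."""
        soc_next = min(max(soc + a, 0), N_LEVELS - 1)
        delta = soc_next - soc        # energy actually moved (0 if at a limit)
        reward = -delta * PRICE[t]    # charging costs money, discharging earns it
        return soc_next, reward

    for episode in range(5000):
        soc = random.randrange(N_LEVELS)
        for t in range(len(PRICE)):
            # epsilon-greedy action selection
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(soc, t, x)])
            soc_next, r = step(soc, t, a)
            t_next = (t + 1) % len(PRICE)
            best_next = max(Q[(soc_next, t_next, b)] for b in ACTIONS)
            # standard Q-learning temporal-difference update
            Q[(soc, t, a)] += ALPHA * (r + GAMMA * best_next - Q[(soc, t, a)])
            soc = soc_next

    # Inspect the learned greedy policy per (state, time slot)
    for t in range(len(PRICE)):
        policy = [max(ACTIONS, key=lambda x: Q[(s, t, x)]) for s in range(N_LEVELS)]
        print(f"slot {t} (price {PRICE[t]}): {policy}")

With enough episodes, the greedy policy charges in the cheap slot and discharges in the expensive one; real energy systems add the continuous states, safety constraints, and sample-cost issues that the review identifies as obstacles to a direct shift to RL.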

Suggested Citation

  • Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
  • Handle: RePEc:eee:rensus:v:137:y:2021:i:c:s1364032120309023
    DOI: 10.1016/j.rser.2020.110618

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1364032120309023
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.rser.2020.110618?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Kazmi, H. & D’Oca, S. & Delmastro, C. & Lodeweyckx, S. & Corgnati, S.P., 2016. "Generalizable occupant-driven optimization model for domestic hot water production in NZEB," Applied Energy, Elsevier, vol. 175(C), pages 1-15.
    2. Musonye, Xavier S. & Davíðsdóttir, Brynhildur & Kristjánsson, Ragnar & Ásgeirsson, Eyjólfur I. & Stefánsson, Hlynur, 2020. "Integrated energy systems’ modeling studies for sub-Saharan Africa: A scoping review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 128(C).
    3. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    4. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    5. Perera, A.T.D. & Attalage, R.A. & Perera, K.K.C.K. & Dassanayake, V.P.C., 2013. "Designing standalone hybrid energy systems minimizing initial investment, life cycle cost and pollutant emission," Energy, Elsevier, vol. 54(C), pages 220-230.
    6. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    7. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    8. Frederik Ruelens & Sandro Iacovella & Bert J. Claessens & Ronnie Belmans, 2015. "Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning," Energies, MDPI, vol. 8(8), pages 1-19, August.
    9. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    10. Mohajeri, Nahid & Perera, A.T.D. & Coccolo, Silvia & Mosca, Lucas & Le Guen, Morgane & Scartezzini, Jean-Louis, 2019. "Integrating urban form and distributed energy systems: Assessment of sustainable development scenarios for a Swiss village to 2050," Renewable Energy, Elsevier, vol. 143(C), pages 810-826.
    11. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    12. Zou, Yuan & Liu, Teng & Liu, Dexing & Sun, Fengchun, 2016. "Reinforcement learning-based real-time energy management for a hybrid tracked vehicle," Applied Energy, Elsevier, vol. 171(C), pages 372-382.
    13. Zhou, Quan & Li, Ji & Shuai, Bin & Williams, Huw & He, Yinglong & Li, Ziyang & Xu, Hongming & Yan, Fuwu, 2019. "Multi-step reinforcement learning for model-free predictive energy management of an electrified off-highway vehicle," Applied Energy, Elsevier, vol. 255(C).
    14. Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
    15. Perera, A.T.D. & Wickremasinghe, D.M.I.J. & Mahindarathna, D.V.S. & Attalage, R.A. & Perera, K.K.C.K. & Bartholameuz, E.M., 2012. "Sensitivity of internal combustion generator capacity in standalone hybrid energy systems," Energy, Elsevier, vol. 39(1), pages 403-411.
    16. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    17. Zhou, Min & Wang, Bo & Li, Tiantian & Watada, Junzo, 2018. "A data-driven approach for multi-objective unit commitment under hybrid uncertainties," Energy, Elsevier, vol. 164(C), pages 722-733.
    18. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
    19. Perera, A.T.D. & Nik, Vahid M. & Mauree, Dasaraden & Scartezzini, Jean-Louis, 2017. "Electrical hubs: An effective way to integrate non-dispatchable renewable energy sources with minimum impact to the grid," Applied Energy, Elsevier, vol. 190(C), pages 232-248.
    20. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    21. Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
    22. Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
    23. Nanduri, Vishnu & Kazemzadeh, Narges, 2012. "Economic impact assessment and operational decision making in emission and transmission constrained electricity markets," Applied Energy, Elsevier, vol. 96(C), pages 212-221.
    24. Junwei Cao & Wanlu Zhang & Zeqing Xiao & Haochen Hua, 2019. "Reactive Power Optimization for Transient Voltage Stability in Energy Internet via Deep Reinforcement Learning Approach," Energies, MDPI, vol. 12(8), pages 1-17, April.
    25. Keirstead, James & Jennings, Mark & Sivakumar, Aruna, 2012. "A review of urban energy system models: Approaches, challenges and opportunities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 16(6), pages 3847-3866.
    26. Xiong, Rui & Cao, Jiayi & Yu, Quanqing, 2018. "Reinforcement learning-based real-time power management for hybrid energy storage system in the plug-in hybrid electric vehicle," Applied Energy, Elsevier, vol. 211(C), pages 538-548.
    27. Perera, A.T.D. & Wickramasinghe, P.U. & Nik, Vahid M. & Scartezzini, Jean-Louis, 2020. "Introducing reinforcement learning to the energy system design process," Applied Energy, Elsevier, vol. 262(C).
    28. Sansavini, G. & Piccinelli, R. & Golea, L.R. & Zio, E., 2014. "A stochastic framework for uncertainty analysis in electric power transmission systems with wind generation," Renewable Energy, Elsevier, vol. 64(C), pages 71-81.
    29. Rocchetta, R. & Bellani, L. & Compare, M. & Zio, E. & Patelli, E., 2019. "A reinforcement learning framework for optimal operation and maintenance of power grids," Applied Energy, Elsevier, vol. 241(C), pages 291-301.
    30. Mavromatidis, Georgios & Orehounig, Kristina & Carmeliet, Jan, 2018. "A review of uncertainty characterisation approaches for the optimal design of distributed energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 88(C), pages 258-277.
    31. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    32. Manfren, Massimiliano & Caputo, Paola & Costa, Gaia, 2011. "Paradigm shift in urban energy systems through distributed generation: Methods and models," Applied Energy, Elsevier, vol. 88(4), pages 1032-1048, April.
    33. Kofinas, P. & Doltsinis, S. & Dounis, A.I. & Vouros, G.A., 2017. "A reinforcement learning approach for MPPT control method of photovoltaic sources," Renewable Energy, Elsevier, vol. 108(C), pages 461-473.
    34. Zhang, Xiaoshun & Li, Shengnan & He, Tingyi & Yang, Bo & Yu, Tao & Li, Haofei & Jiang, Lin & Sun, Liming, 2019. "Memetic reinforcement learning based maximum power point tracking design for PV systems under partial shading condition," Energy, Elsevier, vol. 174(C), pages 1079-1090.
    35. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    36. Liu, Teng & Wang, Bo & Yang, Chenglang, 2018. "Online Markov Chain-based energy management for a hybrid tracked vehicle with speedy Q-learning," Energy, Elsevier, vol. 160(C), pages 544-555.
    37. Kalogirou, Soteris A., 2000. "Applications of artificial neural-networks for energy systems," Applied Energy, Elsevier, vol. 67(1-2), pages 17-35, September.
    38. Buttler, Alexander & Spliethoff, Hartmut, 2018. "Current status of water electrolysis for energy storage, grid balancing and sector coupling via power-to-gas and power-to-liquids: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 82(P3), pages 2440-2454.
    39. Aitor Saenz-Aguirre & Ekaitz Zulueta & Unai Fernandez-Gamiz & Javier Lozano & Jose Manuel Lopez-Guede, 2019. "Artificial Neural Network Based Reinforcement Learning for Wind Turbine Yaw Control," Energies, MDPI, vol. 12(3), pages 1-17, January.
    40. Mauree, Dasaraden & Naboni, Emanuele & Coccolo, Silvia & Perera, A.T.D. & Nik, Vahid M. & Scartezzini, Jean-Louis, 2019. "A review of assessment methods for the urban environment and its energy sustainability to guarantee climate adaptation of future cities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 112(C), pages 733-746.
    41. A. T. D. Perera & Vahid M. Nik & Deliang Chen & Jean-Louis Scartezzini & Tianzhen Hong, 2020. "Quantifying the impacts of climate change and extreme climate events on energy systems," Nature Energy, Nature, vol. 5(2), pages 150-159, February.
    42. Perera, A.T.D. & Nik, Vahid M. & Mauree, Dasaraden & Scartezzini, Jean-Louis, 2017. "An integrated approach to design site specific distributed electrical hubs combining optimization, multi-criterion assessment and decision making," Energy, Elsevier, vol. 134(C), pages 103-120.
    43. Perera, A.T.D. & Attalage, R.A. & Perera, K.K.C.K. & Dassanayake, V.P.C., 2013. "A hybrid tool to combine multi-objective optimization and multi-criterion decision making in designing standalone hybrid energy systems," Applied Energy, Elsevier, vol. 107(C), pages 412-425.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    3. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    4. Perera, A.T.D. & Zhao, Bingyu & Wang, Zhe & Soga, Kenichi & Hong, Tianzhen, 2023. "Optimal design of microgrids to improve wildfire resilience for vulnerable communities at the wildland-urban interface," Applied Energy, Elsevier, vol. 335(C).
    5. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    6. Perera, A.T.D. & Wickramasinghe, P.U. & Nik, Vahid M. & Scartezzini, Jean-Louis, 2020. "Introducing reinforcement learning to the energy system design process," Applied Energy, Elsevier, vol. 262(C).
    7. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    8. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    9. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    10. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    11. Nik, Vahid M. & Moazami, Amin, 2021. "Using collective intelligence to enhance demand flexibility and climate resilience in urban areas," Applied Energy, Elsevier, vol. 281(C).
    12. Perera, A.T.D. & Javanroodi, Kavan & Nik, Vahid M., 2021. "Climate resilient interconnected infrastructure: Co-optimization of energy systems and urban morphology," Applied Energy, Elsevier, vol. 285(C).
    13. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    14. Perera, A.T.D. & Coccolo, Silvia & Scartezzini, Jean-Louis & Mauree, Dasaraden, 2018. "Quantifying the impact of urban climate by extending the boundaries of urban energy system modeling," Applied Energy, Elsevier, vol. 222(C), pages 847-860.
    15. Perera, A.T.D. & Nik, Vahid M. & Wickramasinghe, P.U. & Scartezzini, Jean-Louis, 2019. "Redefining energy system flexibility for distributed energy system design," Applied Energy, Elsevier, vol. 253(C), pages 1-1.
    16. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    17. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    18. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    19. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    20. Karni Siraganyan & Amarasinghage Tharindu Dasun Perera & Jean-Louis Scartezzini & Dasaraden Mauree, 2019. "Eco-Sim: A Parametric Tool to Evaluate the Environmental and Economic Feasibility of Decentralized Energy Systems," Energies, MDPI, vol. 12(5), pages 1-22, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:rensus:v:137:y:2021:i:c:s1364032120309023. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600126/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.