Printed from https://ideas.repec.org/a/gam/jeners/v14y2021i10p2933-d557507.html

Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings

Author

Listed:
  • Davide Deltetto

    (TEBE Research Group, BAEDA Lab, Department of Energy “Galileo Ferraris”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy)

  • Davide Coraci

    (TEBE Research Group, BAEDA Lab, Department of Energy “Galileo Ferraris”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy)

  • Giuseppe Pinto

    (TEBE Research Group, BAEDA Lab, Department of Energy “Galileo Ferraris”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy)

  • Marco Savino Piscitelli

    (TEBE Research Group, BAEDA Lab, Department of Energy “Galileo Ferraris”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy)

  • Alfonso Capozzoli

    (TEBE Research Group, BAEDA Lab, Department of Energy “Galileo Ferraris”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy)

Abstract

Demand Response (DR) programs represent an effective way to optimally manage building energy demand while increasing Renewable Energy Sources (RES) integration and grid reliability, supporting the decarbonization of the electricity sector. To fully exploit such opportunities, buildings are required to become sources of energy flexibility, adapting their energy demand to meet specific grid requirements. However, in most cases, the energy flexibility of a single building is too small to be exploited in the flexibility market, highlighting the need to perform analyses at the multiple-building scale. This study explores the economic benefits associated with the implementation of a Reinforcement Learning (RL) control strategy for the participation of a cluster of commercial buildings in an incentive-based demand response program. To this purpose, optimized Rule-Based Control (RBC) strategies are compared with an RL controller. Moreover, a hybrid control strategy exploiting both RBC and RL is proposed. Results show that the RL algorithm outperforms the RBC in reducing the total energy cost, but it is less effective in fulfilling DR requirements. The hybrid controller achieves reductions in energy consumption and energy costs of 7% and 4%, respectively, compared to a manually optimized RBC, while fulfilling DR constraints during incentive-based events.
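The hybrid control strategy summarized above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the setpoint values, and the toy proportional rule standing in for a trained RL policy are all assumptions introduced for clarity.

```python
# Illustrative sketch of a hybrid RBC/RL dispatcher for demand response:
# a rule-based controller (RBC) guarantees DR constraints during incentive
# events, while a learned policy handles normal operation. All names and
# thresholds here are hypothetical, not taken from the paper.

def rbc_action(indoor_temp_c: float, dr_event_active: bool) -> float:
    """Rule-based cooling setpoint (degC): relax the setpoint during a
    DR event to curtail HVAC power and honor the incentive-based request."""
    if dr_event_active:
        return 26.0  # relaxed setpoint: shed load during the event
    return 24.0      # nominal comfort setpoint

def rl_action(indoor_temp_c: float) -> float:
    """Stand-in for a trained RL policy (e.g., a soft actor-critic agent).
    A toy proportional rule, clamped to a comfort band, keeps the sketch
    runnable without a trained model."""
    proposed = 24.0 + 0.5 * (24.0 - indoor_temp_c)
    return max(22.0, min(26.0, proposed))

def hybrid_action(indoor_temp_c: float, dr_event_active: bool) -> float:
    """Hybrid dispatch: the RBC overrides the RL policy whenever a DR
    event is active, so DR constraints are always fulfilled; otherwise
    the RL policy pursues cost and energy savings."""
    if dr_event_active:
        return rbc_action(indoor_temp_c, dr_event_active)
    return rl_action(indoor_temp_c)
```

The key design point, mirroring the paper's finding, is the override: the RL policy is trusted only outside DR events, where its cost-minimizing behavior helps, while the deterministic rules guarantee compliance when incentives are at stake.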

Suggested Citation

  • Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:10:p:2933-:d:557507

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/10/2933/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/10/2933/
    Download Restriction: no

    References listed on IDEAS

    1. Meyabadi, A. Fattahi & Deihimi, M.H., 2017. "A review of demand-side management: Reconsidering theoretical framework," Renewable and Sustainable Energy Reviews, Elsevier, vol. 80(C), pages 367-379.
    2. Nwulu, Nnamdi I. & Xia, Xiaohua, 2017. "Optimal dispatch for a microgrid incorporating renewables and demand response," Renewable Energy, Elsevier, vol. 101(C), pages 16-28.
    3. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    4. Auer, Hans & Haas, Reinhard, 2016. "On integrating large shares of variable renewables into the electricity system," Energy, Elsevier, vol. 115(P3), pages 1592-1601.
    5. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    6. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
7. Ah-Yun Yoon & Hyun-Koo Kang & Seung-Il Moon, 2020. "Optimal Price Based Demand Response of HVAC Systems in Commercial Buildings Considering Peak Load Reduction," Energies, MDPI, vol. 13(4), pages 1-20, February.
    8. Laihyuk Park & Yongwoon Jang & Hyoungchel Bae & Juho Lee & Chang Yun Park & Sungrae Cho, 2017. "Automated Energy Scheduling Algorithms for Residential Demand Response Systems," Energies, MDPI, vol. 10(9), pages 1-17, September.
    9. Mohammadshayan Latifi & Reza Sabzehgar & Poria Fajri & Mohammad Rasouli, 2021. "A Novel Control Strategy for the Frequency and Voltage Regulation of Distribution Grids Using Electric Vehicle Batteries," Energies, MDPI, vol. 14(5), pages 1-16, March.
    10. Michael Short & Sergio Rodriguez & Richard Charlesworth & Tracey Crosbie & Nashwan Dawood, 2019. "Optimal Dispatch of Aggregated HVAC Units for Demand Response: An Industry 4.0 Approach," Energies, MDPI, vol. 12(22), pages 1-20, November.
    11. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    12. Mohammed M. Olama & Teja Kuruganti & James Nutaro & Jin Dong, 2018. "Coordination and Control of Building HVAC Systems to Provide Frequency Regulation to the Electric Grid," Energies, MDPI, vol. 11(7), pages 1-15, July.
    13. Ran, Fengming & Gao, Dian-ce & Zhang, Xu & Chen, Shuyue, 2020. "A virtual sensor based self-adjusting control for HVAC fast demand response in commercial buildings towards smart grid applications," Applied Energy, Elsevier, vol. 269(C).
    14. Lund, Peter D. & Lindgren, Juuso & Mikkola, Jani & Salpakari, Jyri, 2015. "Review of energy system flexibility measures to enable high levels of variable renewable electricity," Renewable and Sustainable Energy Reviews, Elsevier, vol. 45(C), pages 785-807.
    15. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    16. Kumar, Kandasamy Nandha & Tseng, King Jet, 2016. "Impact of demand response management on chargeability of electric vehicles," Energy, Elsevier, vol. 111(C), pages 190-196.
    17. Nan, Sibo & Zhou, Ming & Li, Gengyin, 2018. "Optimal residential community demand response scheduling in smart grid," Applied Energy, Elsevier, vol. 210(C), pages 1280-1289.
    18. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    19. Wang, Jianhui & Liu, Cong & Ton, Dan & Zhou, Yan & Kim, Jinho & Vyas, Anantray, 2011. "Impact of plug-in hybrid electric vehicles on power systems with demand response and wind power," Energy Policy, Elsevier, vol. 39(7), pages 4016-4021, July.
    20. Ricardo Faia & Pedro Faria & Zita Vale & João Spinola, 2019. "Demand Response Optimization Using Particle Swarm Algorithm Considering Optimum Battery Energy Storage Schedule in a Residential House," Energies, MDPI, vol. 12(9), pages 1-18, April.
    21. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    2. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    3. Anastasios I. Dounis, 2022. "Machine Intelligence in Smart Buildings," Energies, MDPI, vol. 16(1), pages 1-5, December.
    4. Yinfeng Wang & Longxiang Wang & Xiaoshe Dong, 2021. "An Intelligent TCP Congestion Control Method Based on Deep Q Network," Future Internet, MDPI, vol. 13(10), pages 1-14, October.
    5. Nicolas A. Campbell & Patrick E. Phelan & Miguel Peinado-Guerrero & Jesus R. Villalobos, 2021. "Improved Air-Conditioning Demand Response of Connected Communities over Individually Optimized Buildings," Energies, MDPI, vol. 14(18), pages 1-17, September.
    6. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    7. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    8. Mahdi Khodayar & Jacob Regan, 2023. "Deep Neural Networks in Power Systems: A Review," Energies, MDPI, vol. 16(12), pages 1-38, June.
    9. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    10. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    11. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    3. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    4. Aya Amer & Khaled Shaban & Ahmed Massoud, 2022. "Demand Response in HEMSs Using DRL and the Impact of Its Various Configurations and Environmental Changes," Energies, MDPI, vol. 15(21), pages 1-20, November.
    5. Kanakadhurga, Dharmaraj & Prabaharan, Natarajan, 2022. "Demand side management in microgrid: A critical review of key issues and recent trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    6. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    7. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    8. Barja-Martinez, Sara & Aragüés-Peñalba, Mònica & Munné-Collado, Íngrid & Lloret-Gallego, Pau & Bullich-Massagué, Eduard & Villafafila-Robles, Roberto, 2021. "Artificial intelligence techniques for enabling Big Data services in distribution networks: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 150(C).
    9. Jordehi, A. Rezaee, 2019. "Optimisation of demand response in electric power systems, a review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 103(C), pages 308-319.
    10. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    11. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    12. Fernando Lezama & Ricardo Faia & Pedro Faria & Zita Vale, 2020. "Demand Response of Residential Houses Equipped with PV-Battery Systems: An Application Study Using Evolutionary Algorithms," Energies, MDPI, vol. 13(10), pages 1-18, May.
    13. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    14. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    15. Christoforos Menos-Aikateriniadis & Ilias Lamprinos & Pavlos S. Georgilakis, 2022. "Particle Swarm Optimization in Residential Demand-Side Management: A Review on Scheduling and Control Algorithms for Demand Response Provision," Energies, MDPI, vol. 15(6), pages 1-26, March.
    16. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    17. Cruz, Marco R.M. & Fitiwi, Desta Z. & Santos, Sérgio F. & Catalão, João P.S., 2018. "A comprehensive survey of flexibility options for supporting the low-carbon energy future," Renewable and Sustainable Energy Reviews, Elsevier, vol. 97(C), pages 338-353.
    18. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    19. Tsaousoglou, Georgios & Giraldo, Juan S. & Paterakis, Nikolaos G., 2022. "Market Mechanisms for Local Electricity Markets: A review of models, solution concepts and algorithmic techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    20. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:14:y:2021:i:10:p:2933-:d:557507. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.