Printed from https://ideas.repec.org/a/gam/jeners/v15y2022i21p8235-d963203.html

Demand Response in HEMSs Using DRL and the Impact of Its Various Configurations and Environmental Changes

Author

Listed:
  • Aya Amer

    (Electrical Engineering Department, Qatar University, Doha 2713, Qatar)

  • Khaled Shaban

    (Computer Science and Engineering Department, Qatar University, Doha 2713, Qatar)

  • Ahmed Massoud

    (Electrical Engineering Department, Qatar University, Doha 2713, Qatar)

Abstract

With smart grid advances, enormous amounts of data are made available, enabling the training of machine learning algorithms such as deep reinforcement learning (DRL). Recent research has utilized DRL to obtain optimal solutions for complex real-time optimization problems, including demand response (DR), where traditional methods fail to meet time and complexity requirements. Although DRL has shown good performance in particular use cases, most studies do not report the impact of various DRL settings. This paper studies DRL performance when addressing DR in home energy management systems (HEMSs). The trade-offs of various DRL configurations and how they influence HEMS performance are investigated. The main elements that affect DRL model training are identified, including state-action pairs, the reward function, and hyperparameters. Various representations of these elements are analyzed to characterize their impact. In addition, different environmental changes and scenarios are considered to analyze the model’s scalability and adaptability. The findings demonstrate that DRL, when appropriately configured, is well suited to HEMS challenges: it successfully schedules 73% to 98% of the appliances across the simulation scenarios and reduces electricity costs by 19% to 47%.
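The training elements the abstract identifies (state-action pairs, a reward function, and hyperparameters) can be illustrated with a deliberately simplified sketch. The following tabular Q-learning toy is not the paper's model: the price profile, load size, miss penalty, and hyperparameter values are all invented for illustration. It schedules one shiftable appliance over a 24-hour horizon, where the state is (hour, has-the-appliance-run), the actions are wait/run, and the reward is the negative electricity cost plus a penalty if the day ends without the appliance running.

```python
# Hypothetical sketch (not the paper's model): tabular Q-learning that
# schedules a single shiftable appliance across 24 hourly price slots.
# State: (hour, done); actions: 0 = wait, 1 = run. All numbers invented.
import random

PRICES = [0.10] * 7 + [0.30] * 10 + [0.50] * 4 + [0.10] * 3  # $/kWh, 24 slots
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.95, 0.1, 5000
LOAD_KWH = 2.0          # assumed appliance energy use per run
MISS_PENALTY = -5.0     # assumed penalty if the appliance never ran

def train(seed=0):
    rng = random.Random(seed)
    Q = {}  # (hour, done) -> [q_wait, q_run]
    for _ in range(EPISODES):
        hour, done = 0, False
        while hour < 24:
            s = (hour, done)
            q = Q.setdefault(s, [0.0, 0.0])
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < EPS else q.index(max(q))
            if a == 1 and not done:
                r, done_next = -PRICES[hour] * LOAD_KWH, True
            else:  # waiting (or "running" again) costs nothing
                r, done_next = 0.0, done
            if hour == 23 and not done_next:
                r += MISS_PENALTY  # reward shaping: day ended, load never ran
            s2 = (hour + 1, done_next)
            q2 = max(Q.get(s2, [0.0, 0.0])) if hour < 23 else 0.0
            q[a] += ALPHA * (r + GAMMA * q2 - q[a])  # Q-learning update
            hour, done = hour + 1, done_next
    return Q

def greedy_run_hour(Q):
    """Hour at which the greedy (learned) policy runs the appliance."""
    hour, done = 0, False
    while hour < 24:
        q = Q.get((hour, done), [0.0, 0.0])
        if q[1] > q[0] and not done:
            return hour
        hour += 1
    return None
```

After training, the greedy policy runs the appliance in a low-price slot, which mirrors the paper's point that the reward representation (here, cost plus a miss penalty) and hyperparameters such as the learning rate and discount factor jointly determine whether the schedule converges to a sensible one.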

Suggested Citation

  • Aya Amer & Khaled Shaban & Ahmed Massoud, 2022. "Demand Response in HEMSs Using DRL and the Impact of Its Various Configurations and Environmental Changes," Energies, MDPI, vol. 15(21), pages 1-20, November.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:21:p:8235-:d:963203

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/21/8235/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/21/8235/
    Download Restriction: no

    References listed on IDEAS

    1. Nwulu, Nnamdi I. & Xia, Xiaohua, 2017. "Optimal dispatch for a microgrid incorporating renewables and demand response," Renewable Energy, Elsevier, vol. 101(C), pages 16-28.
    2. Yeongenn Kwon & Taeyoung Kim & Keon Baek & Jinho Kim, 2020. "Multi-Objective Optimization of Home Appliances and Electric Vehicle Considering Customer’s Benefits and Offsite Shared Photovoltaic Curtailment," Energies, MDPI, vol. 13(11), pages 1-16, June.
    3. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    4. Aya Amer & Khaled Shaban & Ahmed Gaouda & Ahmed Massoud, 2021. "Home Energy Management System Embedded with a Multi-Objective Demand Response Optimization Model to Benefit Customers and Operators," Energies, MDPI, vol. 14(2), pages 1-19, January.
    5. Athanasios Paraskevas & Dimitrios Aletras & Antonios Chrysopoulos & Antonios Marinopoulos & Dimitrios I. Doukas, 2022. "Optimal Management for EV Charging Stations: A Win–Win Strategy for Different Stakeholders Using Constrained Deep Q-Learning," Energies, MDPI, vol. 15(7), pages 1-24, March.
    6. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    7. Amir Mosavi & Mohsen Salimi & Sina Faizollahzadeh Ardabili & Timon Rabczuk & Shahaboddin Shamshirband & Annamaria R. Varkonyi-Koczy, 2019. "State of the Art of Machine Learning Models in Energy Systems, a Systematic Review," Energies, MDPI, vol. 12(7), pages 1-42, April.
    8. Suchitra Dayalan & Sheikh Suhaib Gul & Rajarajeswari Rathinam & George Fernandez Savari & Shady H. E. Abdel Aleem & Mohamed A. Mohamed & Ziad M. Ali, 2022. "Multi-Stage Incentive-Based Demand Response Using a Novel Stackelberg–Particle Swarm Optimization," Sustainability, MDPI, vol. 14(17), pages 1-25, September.
    9. Chongchong Xu & Zhicheng Liao & Chaojie Li & Xiaojun Zhou & Renyou Xie, 2022. "Review on Interpretable Machine Learning in Smart Grid," Energies, MDPI, vol. 15(12), pages 1-31, June.
    10. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    11. Honarmand, Masoud & Zakariazadeh, Alireza & Jadid, Shahram, 2014. "Optimal scheduling of electric vehicles in an intelligent parking lot considering vehicle-to-grid concept and battery condition," Energy, Elsevier, vol. 65(C), pages 572-579.
    12. Ricardo Faia & Pedro Faria & Zita Vale & João Spinola, 2019. "Demand Response Optimization Using Particle Swarm Algorithm Considering Optimum Battery Energy Storage Schedule in a Residential House," Energies, MDPI, vol. 12(9), pages 1-18, April.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    4. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    5. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    6. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    7. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    8. Ussama Assad & Muhammad Arshad Shehzad Hassan & Umar Farooq & Asif Kabir & Muhammad Zeeshan Khan & S. Sabahat H. Bukhari & Zain ul Abidin Jaffri & Judit Oláh & József Popp, 2022. "Smart Grid, Demand Response and Optimization: A Critical Review of Computational Methods," Energies, MDPI, vol. 15(6), pages 1-36, March.
    9. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    10. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    11. Fernando Lezama & Ricardo Faia & Pedro Faria & Zita Vale, 2020. "Demand Response of Residential Houses Equipped with PV-Battery Systems: An Application Study Using Evolutionary Algorithms," Energies, MDPI, vol. 13(10), pages 1-18, May.
    12. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.
    13. Sabarathinam Srinivasan & Suresh Kumarasamy & Zacharias E. Andreadakis & Pedro G. Lind, 2023. "Artificial Intelligence and Mathematical Models of Power Grids Driven by Renewable Energy Sources: A Survey," Energies, MDPI, vol. 16(14), pages 1-56, July.
    14. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    15. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    16. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    17. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    18. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    19. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    20. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:21:p:8235-:d:963203. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.