
Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning

Author

Listed:
  • Se-Heon Lim

    (Department of Electrical Engineering, Soongsil University, Seoul 06978, Republic of Korea)

  • Sung-Guk Yoon

    (Department of Electrical Engineering, Soongsil University, Seoul 06978, Republic of Korea)

Abstract

Conventional volt-VAR control (VVC) in distribution systems has limitations in solving the overvoltage problem caused by massive solar photovoltaic (PV) deployment. As an alternative, VVC using solar PV smart inverters (PVSIs), which can respond quickly and effectively to overvoltage by absorbing reactive power, has attracted attention. However, the network power loss, that is, the sum of line losses in the distribution network, increases with the reactive power absorbed. Dynamic distribution network reconfiguration (DNR), which adjusts the network topology hourly by controlling sectionalizing and tie switches, can also mitigate overvoltage and reduce network loss by changing the power flow in the network. In this study, to improve the voltage profile and minimize the network power loss, we propose a control scheme that integrates dynamic DNR with volt-VAR control of PVSIs. The proposed control scheme is practical for three reasons. First, it is based on a deep reinforcement learning (DRL) algorithm and therefore does not require accurate distribution system parameters. Second, it uses a heterogeneous multi-agent DRL algorithm that controls the switches centrally and the PVSIs locally. Third, it assumes a practical communication network in the distribution system: PVSIs only send their status to the central control center, and there is no communication between PVSIs. A modified 33-bus distribution test feeder reflecting the system conditions of South Korea is used for the case study. The results demonstrate that the proposed control scheme effectively improves the voltage profile of the distribution system. In addition, the proposed scheme reduces the total power loss in the distribution system, defined as the sum of the network power loss and the solar PV energy curtailed owing to voltage violations.
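
To make the control architecture described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a heterogeneous two-level scheme: local PVSI agents act only on their own bus voltage and report their status one way to the control center, while a central agent picks one switch configuration per hour from a discrete set of feasible radial topologies. All names (PvsiAgent, SwitchAgent, cfg_A, ...) are illustrative assumptions; the learned local policy is replaced by a simple volt-VAR droop, the central deep network by a crude value table, and a power-flow simulator by random bus voltages.

    # Illustrative sketch only. Class and function names are hypothetical, not from the paper.
    import random

    class PvsiAgent:
        """Local agent: chooses a reactive-power setpoint from its own bus voltage (p.u.)."""
        def __init__(self, q_max_kvar):
            self.q_max = q_max_kvar

        def act(self, v_pu):
            # Volt-VAR droop stand-in for a learned local policy:
            # absorb VARs above 1.02 p.u., inject below 0.98 p.u., dead band in between.
            if v_pu > 1.02:
                return -min(self.q_max, self.q_max * (v_pu - 1.02) / 0.03)
            if v_pu < 0.98:
                return min(self.q_max, self.q_max * (0.98 - v_pu) / 0.03)
            return 0.0

        def status(self, v_pu, p_kw):
            # One-way report to the control center; PVSIs do not talk to each other.
            return {"v_pu": v_pu, "p_kw": p_kw}

    class SwitchAgent:
        """Central agent: selects one switch configuration per hour from a discrete set."""
        def __init__(self, configurations, epsilon=0.1):
            self.configs = configurations                   # feasible radial topologies
            self.values = {c: 0.0 for c in configurations}  # crude value table as a placeholder
            self.epsilon = epsilon

        def act(self, reports):
            # In a DRL agent the PVSI reports would form the state; this placeholder ignores them.
            if random.random() < self.epsilon:              # explore
                return random.choice(self.configs)
            return max(self.configs, key=self.values.get)   # exploit the current estimate

        def update(self, config, reward, lr=0.1):
            self.values[config] += lr * (reward - self.values[config])

    if __name__ == "__main__":
        pvsis = [PvsiAgent(q_max_kvar=50.0) for _ in range(3)]
        center = SwitchAgent(configurations=["cfg_A", "cfg_B", "cfg_C"])
        for hour in range(24):
            voltages = [random.uniform(0.97, 1.06) for _ in pvsis]          # stand-in for power flow
            q_setpoints = [a.act(v) for a, v in zip(pvsis, voltages)]       # local PVSI control
            reports = [a.status(v, p_kw=100.0) for a, v in zip(pvsis, voltages)]
            config = center.act(reports)                                    # hourly topology choice
            # Reward placeholder: penalize voltage deviation. A realistic reward would combine
            # network power loss and curtailed PV energy computed from a power-flow model.
            reward = -sum(abs(v - 1.0) for v in voltages)
            center.update(config, reward)
            print(f"hour {hour:02d}: topology={config}, Q_kvar={[round(q, 1) for q in q_setpoints]}")

In a full DRL setup along the lines described above, the droop rule and the value table would be replaced by trained neural policies, and the reward would combine network power loss with curtailed PV energy, matching the total-loss objective stated in the abstract.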

Suggested Citation

  • Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9220-:d:994308

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/23/9220/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/23/9220/
    Download Restriction: no

    References listed on IDEAS

    1. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
    2. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    3. Ji, Haoran & Wang, Chengshan & Li, Peng & Zhao, Jinli & Song, Guanyu & Ding, Fei & Wu, Jianzhong, 2018. "A centralized-based method to determine the local voltage control strategies of distributed generator operation in active distribution networks," Applied Energy, Elsevier, vol. 228(C), pages 2024-2036.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    2. Mak, Davye & Choeum, Daranith & Choi, Dae-Hyun, 2020. "Sensitivity analysis of volt-VAR optimization to data changes in distribution networks with distributed energy resources," Applied Energy, Elsevier, vol. 261(C).
    3. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    4. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    5. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer;The Industrial Organization Society, vol. 55(1), pages 155-171, August.
    6. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    7. Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
    8. Zhang, Zhengfa & da Silva, Filipe Faria & Guo, Yifei & Bak, Claus Leth & Chen, Zhe, 2021. "Double-layer stochastic model predictive voltage control in active distribution networks with high penetration of renewables," Applied Energy, Elsevier, vol. 302(C).
    9. Lee, Hyun-Rok & Lee, Taesik, 2021. "Multi-agent reinforcement learning algorithm to solve a partially-observable multi-agent problem in disaster response," European Journal of Operational Research, Elsevier, vol. 291(1), pages 296-308.
    10. Christoph Aymanns & Jakob Foerster & Co-Pierre Georg & Matthias Weber, 2022. "Fake News in Social Networks," Swiss Finance Institute Research Paper Series 22-58, Swiss Finance Institute.
    11. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    12. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    13. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    14. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    15. Kang, Wenfa & Chen, Minyou & Guan, Yajuan & Wei, Baoze & Vasquez Q., Juan C. & Guerrero, Josep M., 2022. "Event-triggered distributed voltage regulation by heterogeneous BESS in low-voltage distribution networks," Applied Energy, Elsevier, vol. 312(C).
    16. Huy, Phung Dang & Ramachandaramurthy, Vigna K. & Yong, Jia Ying & Tan, Kang Miao & Ekanayake, Janaka B., 2020. "Optimal placement, sizing and power factor of distributed generation: A comprehensive study spanning from the planning stage to the operation stage," Energy, Elsevier, vol. 195(C).
    17. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    18. Bartłomiej Mroczek & Paweł Pijarski, 2021. "DSO Strategies Proposal for the LV Grid of the Future," Energies, MDPI, vol. 14(19), pages 1-19, October.
    19. Mohammed Alshehri & Jin Yang, 2024. "Voltage Optimization in Active Distribution Networks—Utilizing Analytical and Computational Approaches in High Renewable Energy Penetration Environments," Energies, MDPI, vol. 17(5), pages 1-33, March.
    20. Wang, Licheng & Yan, Ruifeng & Saha, Tapan Kumar, 2019. "Voltage regulation challenges with unbalanced PV integration in low voltage distribution systems and the corresponding solution," Applied Energy, Elsevier, vol. 256(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9220-:d:994308. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.