
Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks

Author

Listed:
  • Kou, Peng
  • Liang, Deliang
  • Wang, Chen
  • Wu, Zihao
  • Gao, Lin

Abstract

Reinforcement learning-based schemes have recently been applied to model-free voltage control in active distribution networks. However, existing reinforcement learning methods struggle with problems that have continuous state and action spaces, or problems with operation constraints. To address these limitations, this paper proposes an optimal voltage control scheme based on safe deep reinforcement learning. In this scheme, the optimal voltage control problem is formulated as a constrained Markov decision process, in which both the state and action spaces are continuous. To solve this problem efficiently, the deep deterministic policy gradient algorithm is used to learn the reactive power control policies, which determine the optimal control actions from the states. In contrast to existing reinforcement learning methods, deep deterministic policy gradient naturally handles control problems with continuous state and action spaces, because it uses deep neural networks to approximate both the value function and the policy. In addition, to handle the operation constraints of active distribution networks, a safe exploration approach is proposed to form a safety layer, which is composed directly on top of the deep deterministic policy gradient actor network. This safety layer predicts the change in the constrained states and prevents violations of the operation constraints of active distribution networks. Numerical simulations on modified IEEE test systems demonstrate that the proposed scheme keeps all bus voltages within the allowed range and reduces system losses by 15% compared with the no-control case.
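To make the safety-layer idea in the abstract concrete, below is a minimal sketch of a common linearized formulation: each constrained state (e.g. a bus voltage magnitude) is assumed to change approximately linearly in the action, and the actor's raw action is corrected in closed form against the single most-violated constraint. The function name, array shapes, and sensitivity values are illustrative assumptions, not the authors' implementation; the abstract states that the paper's safety layer predicts the change in the constrained states, so in the paper the sensitivities would come from a learned model rather than being hand-supplied as here.

    import numpy as np

    def safety_layer(action, c, g, c_max):
        """Project an actor action back toward the linearized safe set.

        action : (n,)   raw action from the DDPG actor (e.g. reactive power set-points)
        c      : (m,)   current values of the constrained states (e.g. bus voltages)
        g      : (m, n) assumed sensitivities of each constrained state to the action
        c_max  : (m,)   upper limits on the constrained states
        """
        # Predicted violation margin of each constraint after applying `action`;
        # positive entries mean the linear model predicts a limit violation.
        margins = c + g @ action - c_max
        i = int(np.argmax(margins))                 # most-violated constraint
        lam = max(0.0, float(margins[i]) / (float(g[i] @ g[i]) + 1e-8))
        return action - lam * g[i]                  # corrected (safe) action

    # Toy usage: one voltage constraint, two reactive-power set-points.
    a_raw  = np.array([0.6, 0.4])        # action proposed by the actor
    c      = np.array([1.04])            # current bus voltage (p.u.)
    g      = np.array([[0.05, 0.03]])    # hypothetical voltage sensitivities
    c_max  = np.array([1.05])            # upper voltage limit (p.u.)
    a_safe = safety_layer(a_raw, c, g, c_max)   # predicted voltage lands exactly at the limit

The correction is the analytical solution of a small quadratic program (stay as close as possible to the actor's action while satisfying the linearized constraint), which is why such a layer can run inside the training loop at negligible cost.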

Suggested Citation

  • Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
  • Handle: RePEc:eee:appene:v:264:y:2020:i:c:s0306261920302841
    DOI: 10.1016/j.apenergy.2020.114772

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261920302841
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2020.114772?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    2. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    3. Qu, Xiaobo & Yu, Yang & Zhou, Mofan & Lin, Chin-Teng & Wang, Xiangyu, 2020. "Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: A reinforcement learning based approach," Applied Energy, Elsevier, vol. 257(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    2. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    3. Zhang, Zhengfa & da Silva, Filipe Faria & Guo, Yifei & Bak, Claus Leth & Chen, Zhe, 2021. "Double-layer stochastic model predictive voltage control in active distribution networks with high penetration of renewables," Applied Energy, Elsevier, vol. 302(C).
    4. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    5. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    6. Yin, Linfei & Lu, Yuejiang, 2021. "Expandable deep width learning for voltage control of three-state energy model based smart grids containing flexible energy sources," Energy, Elsevier, vol. 226(C).
    7. Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
    8. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    9. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    10. Dehghani, Nariman L. & Jeddi, Ashkan B. & Shafieezadeh, Abdollah, 2021. "Intelligent hurricane resilience enhancement of power distribution systems via deep reinforcement learning," Applied Energy, Elsevier, vol. 285(C).
    11. Jude Suchithra & Duane Robinson & Amin Rajabi, 2023. "Hosting Capacity Assessment Strategies and Reinforcement Learning Methods for Coordinated Voltage Control in Electricity Distribution Networks: A Review," Energies, MDPI, vol. 16(5), pages 1-28, March.
    12. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    13. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    14. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    15. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Juan D. Velásquez & Lorena Cadavid & Carlos J. Franco, 2023. "Intelligence Techniques in Sustainable Energy: Analysis of a Decade of Advances," Energies, MDPI, vol. 16(19), pages 1-45, October.
    3. Xiao, Boyi & Yang, Weiwei & Wu, Jiamin & Walker, Paul D. & Zhang, Nong, 2022. "Energy management strategy via maximum entropy reinforcement learning for an extended range logistics vehicle," Energy, Elsevier, vol. 253(C).
    4. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).
    5. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    6. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    7. Jin, Ruiyang & Zhou, Yuke & Lu, Chao & Song, Jie, 2022. "Deep reinforcement learning-based strategy for charging station participating in demand response," Applied Energy, Elsevier, vol. 328(C).
    8. Xie, Yunkun & Li, Yangyang & Zhao, Zhichao & Dong, Hao & Wang, Shuqian & Liu, Jingping & Guan, Jinhuan & Duan, Xiongbo, 2020. "Microsimulation of electric vehicle energy consumption and driving range," Applied Energy, Elsevier, vol. 267(C).
    9. Shafqat Jawad & Junyong Liu, 2020. "Electrical Vehicle Charging Services Planning and Operation with Interdependent Power Networks and Transportation Networks: A Review of the Current Scenario and Future Trends," Energies, MDPI, vol. 13(13), pages 1-24, July.
    10. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    11. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    12. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    13. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    14. Du, Guodong & Zou, Yuan & Zhang, Xudong & Kong, Zehui & Wu, Jinlong & He, Dingbo, 2019. "Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning," Applied Energy, Elsevier, vol. 251(C), pages 1-1.
    15. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    16. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    17. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    18. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    19. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    20. Chen, Zheng & Hu, Hengjie & Wu, Yitao & Zhang, Yuanjian & Li, Guang & Liu, Yonggang, 2020. "Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 211(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:264:y:2020:i:c:s0306261920302841. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.