IDEAS home Printed from https://ideas.repec.org/a/gam/jeners/v13y2020i5p1250-d329898.html

Assessing the Use of Reinforcement Learning for Integrated Voltage/Frequency Control in AC Microgrids

Author

Listed:
  • Abdollah Younesi

    (Electrical Engineering Department, Faculty of Engineering, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran)

  • Hossein Shayeghi

    (Electrical Engineering Department, Faculty of Engineering, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran)

  • Pierluigi Siano

    (Department of Innovation and Management Systems, University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano (SA), Italy)

Abstract

The main purpose of this paper is to present a novel algorithmic reinforcement learning (RL) method for damping the voltage and frequency oscillations in a micro-grid (MG) with penetration of wind turbine generators (WTGs). First, the continuous-time environment of the system is discretized into a definite number of states to form a Markov decision process (MDP). To solve the resulting discrete RL problem, the Q-learning method, a model-free and simple iterative solution mechanism, is used. The presented control strategy is therefore adaptive and suitable for realistic power systems with high nonlinearities. The proposed adaptive RL controller has a supervisory nature that can improve the performance of any kind of controller by adding an offset signal to its output control signal. Here, a part of the Denmark distribution system is considered, and the dynamic performance of the suggested control mechanism is evaluated and compared with fuzzy-proportional integral derivative (PID) and classical PID controllers. Simulations are carried out in two realistic and challenging scenarios that consider changes in system parameters. Results indicate that the proposed control strategy has an excellent dynamic response compared to fuzzy-PID and traditional PID controllers for damping the voltage and frequency oscillations.
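The supervisory scheme described in the abstract (discretize the continuous state, learn a Q-table, add the learned offset to an existing controller's output) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bin edges, action set, toy plant dynamics, and reward are all hypothetical stand-ins.

```python
import random

# Hypothetical discretization: map a continuous frequency deviation (Hz)
# onto a small set of MDP states, mirroring the abstract's discretization step.
BINS = [-0.5, -0.1, -0.02, 0.02, 0.1, 0.5]   # bin edges (assumed values)
ACTIONS = [-0.05, 0.0, 0.05]                 # offset added to the base control signal (assumed)

def state_of(df):
    """Index of the discretized state for a frequency deviation df."""
    for i, edge in enumerate(BINS):
        if df < edge:
            return i
    return len(BINS)

def q_learning_offset(episodes=200, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Train a Q-table that outputs a corrective offset signal.

    A toy first-order plant stands in for the microgrid: the deviation
    decays and is nudged by the chosen offset. The reward penalizes the
    absolute deviation, so the learned policy damps oscillations.
    """
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(len(BINS) + 1)]
    for _ in range(episodes):
        df = rng.uniform(-0.4, 0.4)          # random initial disturbance
        for _ in range(50):
            s = state_of(df)
            # epsilon-greedy action selection
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            df_next = 0.9 * df + ACTIONS[a]  # toy plant dynamics (assumed)
            r = -abs(df_next)                # reward: damp the deviation
            s2 = state_of(df_next)
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            df = df_next
    return Q

Q = q_learning_offset()
# At runtime the supervisory layer would add ACTIONS[argmax(Q[state])] to the
# base (e.g., PID) control signal, as the abstract describes.
```

The model-free nature of Q-learning is what makes the supervisory layer controller-agnostic: it needs only the observed state and a reward, not a plant model, so it can sit on top of a PID or fuzzy-PID loop unchanged.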

Suggested Citation

  • Abdollah Younesi & Hossein Shayeghi & Pierluigi Siano, 2020. "Assessing the Use of Reinforcement Learning for Integrated Voltage/Frequency Control in AC Microgrids," Energies, MDPI, vol. 13(5), pages 1-22, March.
  • Handle: RePEc:gam:jeners:v:13:y:2020:i:5:p:1250-:d:329898
    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/13/5/1250/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/13/5/1250/
    Download Restriction: no

    References listed on IDEAS

    1. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    2. Wu, Jiajing & Fang, Biaoyan & Fang, Junyuan & Chen, Xi & Tse, Chi K., 2019. "Sequential topology recovery of complex power systems based on reinforcement learning," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 535(C).
    3. Xiong, Rui & Cao, Jiayi & Yu, Quanqing, 2018. "Reinforcement learning-based real-time power management for hybrid energy storage system in the plug-in hybrid electric vehicle," Applied Energy, Elsevier, vol. 211(C), pages 538-548.
    4. Jafari, Mohammad & Malekjamshidi, Zahra, 2020. "Optimal energy management of a residential-based hybrid renewable energy system using rule-based real-time control and 2D dynamic programming optimization method," Renewable Energy, Elsevier, vol. 146(C), pages 254-266.
    5. Hirase, Yuko & Abe, Kensho & Sugimoto, Kazushige & Sakimoto, Kenichi & Bevrani, Hassan & Ise, Toshifumi, 2018. "A novel control approach for virtual synchronous generators to suppress frequency and voltage fluctuations in microgrids," Applied Energy, Elsevier, vol. 210(C), pages 699-710.
    6. Meng, Lexuan & Sanseverino, Eleonora Riva & Luna, Adriana & Dragicevic, Tomislav & Vasquez, Juan C. & Guerrero, Josep M., 2016. "Microgrid supervisory controllers and energy management systems: A literature review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 60(C), pages 1263-1273.
    7. Furqan Asghar & Muhammad Talha & Sung Ho Kim, 2017. "Robust Frequency and Voltage Stability Control Strategy for Standalone AC/DC Hybrid Microgrid," Energies, MDPI, vol. 10(6), pages 1-20, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Marcel Nicola & Claudiu-Ionel Nicola & Dan Selișteanu, 2022. "Improvement of the Control of a Grid Connected Photovoltaic System Based on Synergetic and Sliding Mode Controllers Using a Reinforcement Learning Deep Deterministic Policy Gradient Agent," Energies, MDPI, vol. 15(7), pages 1-32, March.
    2. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    3. Marcel Nicola & Claudiu-Ionel Nicola, 2021. "Fractional-Order Control of Grid-Connected Photovoltaic System Based on Synergetic and Sliding Mode Controllers," Energies, MDPI, vol. 14(2), pages 1-25, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    2. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    3. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    4. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    5. Mir Sayed Shah Danish, 2023. "AI and Expert Insights for Sustainable Energy Future," Energies, MDPI, vol. 16(8), pages 1-27, April.
    6. Álex Omar Topa Gavilema & José Domingo Álvarez & José Luis Torres Moreno & Manuel Pérez García, 2021. "Towards Optimal Management in Microgrids: An Overview," Energies, MDPI, vol. 14(16), pages 1-25, August.
    7. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    8. Alessandro Labella & Filip Filipovic & Milutin Petronijevic & Andrea Bonfiglio & Renato Procopio, 2020. "An MPC Approach for Grid-Forming Inverters: Theory and Experiment," Energies, MDPI, vol. 13(9), pages 1-17, May.
    9. Polimeni, Simone & Moretti, Luca & Martelli, Emanuele & Leva, Sonia & Manzolini, Giampaolo, 2023. "A novel stochastic model for flexible unit commitment of off-grid microgrids," Applied Energy, Elsevier, vol. 331(C).
    10. Restrepo, Mauricio & Cañizares, Claudio A. & Simpson-Porco, John W. & Su, Peter & Taruc, John, 2021. "Optimization- and Rule-based Energy Management Systems at the Canadian Renewable Energy Laboratory microgrid facility," Applied Energy, Elsevier, vol. 290(C).
    11. Nyong-Bassey, Bassey Etim & Giaouris, Damian & Patsios, Charalampos & Papadopoulou, Simira & Papadopoulos, Athanasios I. & Walker, Sara & Voutetakis, Spyros & Seferlis, Panos & Gadoue, Shady, 2020. "Reinforcement learning based adaptive power pinch analysis for energy management of stand-alone hybrid energy storage systems considering uncertainty," Energy, Elsevier, vol. 193(C).
    12. Dominique Barth & Benjamin Cohen-Boulakia & Wilfried Ehounou, 2022. "Distributed Reinforcement Learning for the Management of a Smart Grid Interconnecting Independent Prosumers," Energies, MDPI, vol. 15(4), pages 1-19, February.
    13. Zizzo, G. & Beccali, M. & Bonomolo, M. & Di Pietra, B. & Ippolito, M.G. & La Cascia, D. & Leone, G. & Lo Brano, V. & Monteleone, F., 2017. "A feasibility study of some DSM enabling solutions in small islands: The case of Lampedusa," Energy, Elsevier, vol. 140(P1), pages 1030-1046.
    14. Xiong, Rui & Duan, Yanzhou & Cao, Jiayi & Yu, Quanqing, 2018. "Battery and ultracapacitor in-the-loop approach to validate a real-time power management method for an all-climate electric vehicle," Applied Energy, Elsevier, vol. 217(C), pages 153-165.
    15. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    16. Xiaohan Fang & Jinkuan Wang & Guanru Song & Yinghua Han & Qiang Zhao & Zhiao Cao, 2019. "Multi-Agent Reinforcement Learning Approach for Residential Microgrid Energy Scheduling," Energies, MDPI, vol. 13(1), pages 1-26, December.
    17. Tsoumalis, Georgios I. & Bampos, Zafeirios N. & Biskas, Pandelis N. & Keranidis, Stratos D. & Symeonidis, Polychronis A. & Voulgarakis, Dimitrios K., 2022. "A novel system for providing explicit demand response from domestic natural gas boilers," Applied Energy, Elsevier, vol. 317(C).
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    19. Xu, Xiao & Hu, Weihao & Cao, Di & Liu, Wen & Huang, Qi & Hu, Yanting & Chen, Zhe, 2021. "Enhanced design of an offgrid PV-battery-methanation hybrid energy system for power/gas supply," Renewable Energy, Elsevier, vol. 167(C), pages 440-456.
    20. Mohamed El-Hendawi & Hossam A. Gabbar & Gaber El-Saady & El-Nobi A. Ibrahim, 2018. "Control and EMS of a Grid-Connected Microgrid with Economical Analysis," Energies, MDPI, vol. 11(1), pages 1-20, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:13:y:2020:i:5:p:1250-:d:329898. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.