
Adaptive optimal secure wind power generation control for variable speed wind turbine systems via reinforcement learning

Author

Listed:
  • Mazare, Mahmood

Abstract

As the utilization of wind energy continues to grow, it is crucial to identify vulnerabilities, raise awareness, and develop cybersecurity defense strategies. False data injection (FDI) attacks targeting the communication between the rotor speed sensor and the wind turbine (WT) controller can disrupt normal system operation, overload the drive-train, and significantly reduce the turbine's power generation efficiency. Therefore, this paper presents an adaptive optimal secure control strategy for WT systems that employs a reinforcement learning (RL) neural network (NN) based on the filtered error to compensate for the detrimental effects of FDI attacks as well as actuator faults. The Hamilton–Jacobi–Bellman (HJB) equation is constructed and solved to obtain the optimal control policy. Because the HJB equation is inherently nonlinear and complex, solving it directly is quite challenging. To address this and approximate the HJB solution, an actor–critic RL strategy is used, in which actor and critic NNs execute the control action and assess the control performance, respectively. To detect FDI attacks, an anomaly detection scheme is developed using a nonlinear observer/estimator. Stability analysis based on Lyapunov theory guarantees that the error signals are semi-globally uniformly ultimately bounded (SGUUB). Finally, simulation results verify the effectiveness of the proposed control approach.
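To make the structure described in the abstract concrete, the following is a minimal sketch, not the paper's implementation: the scalar plant, polynomial critic/actor features, gains, learning rates, and detection threshold are all assumptions chosen purely for illustration. The critic fits a value function to the discretized HJB/Bellman residual, the actor is driven toward the HJB-greedy policy, and a simple nonlinear observer flags rotor-speed measurements whose residual exceeds a threshold (the FDI anomaly detector).

```python
# Minimal sketch (assumed toy model, not the paper's WT dynamics or tuning):
# actor-critic approximation of the HJB solution plus observer-based FDI detection.
import numpy as np

dt = 0.01
f = lambda x: -x + 0.5 * np.sin(x)        # assumed nonlinear drift
g = lambda x: 1.0                         # assumed input gain
cost = lambda x, u: x**2 + u**2           # quadratic running cost (Q = R = 1)

phi  = lambda x: np.array([x**2, x**4])   # critic features, V(x) ~ Wc . phi(x)
dphi = lambda x: np.array([2*x, 4*x**3])  # gradient of the features
Wc = np.zeros(2)                          # critic weights
Wa = np.zeros(2)                          # actor weights, u(x) ~ Wa . dphi(x)
alpha_c, alpha_a = 0.05, 0.02             # assumed learning rates
rng = np.random.default_rng(0)

x = 1.0
for k in range(20000):
    if k % 200 == 0:                      # re-seed the state for persistent excitation
        x = rng.uniform(-1.0, 1.0)
    u = float(Wa @ dphi(x)) + 0.05 * rng.standard_normal()  # actor output + exploration noise
    x_next = x + dt * (f(x) + g(x) * u)                      # plant step
    # Critic: temporal-difference residual of the discretized HJB/Bellman equation
    delta = cost(x, u) * dt + Wc @ phi(x_next) - Wc @ phi(x)
    Wc += alpha_c * delta * phi(x)
    # Actor: track the HJB-greedy policy u* = -0.5 * R^{-1} * g(x) * dV/dx
    u_star = -0.5 * g(x) * (Wc @ dphi(x))
    Wa -= alpha_a * (float(Wa @ dphi(x)) - u_star) * dphi(x)
    x = x_next

# Observer-based anomaly detection: flag an FDI attack on the rotor-speed measurement
# when the residual between the measurement and the observer estimate exceeds a threshold.
L, threshold = 2.0, 0.3                   # assumed observer gain and detection threshold
x_true, x_hat = 0.5, 0.5
for k in range(1000):
    y = x_true + (0.8 if k > 500 else 0.0)   # false data injected after step 500
    u = float(Wa @ dphi(y))                  # controller acts on the (possibly attacked) measurement
    x_true += dt * (f(x_true) + g(x_true) * u)
    x_hat  += dt * (f(x_hat) + g(x_hat) * u + L * (y - x_hat))
    if abs(y - x_hat) > threshold:
        print(f"FDI attack flagged at step {k}, residual = {abs(y - x_hat):.2f}")
        break
```

In the article itself the actor and critic are neural networks acting on the full variable-speed WT model, and the Lyapunov analysis provides the SGUUB guarantee; the sketch above only mirrors that high-level structure.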

Suggested Citation

  • Mazare, Mahmood, 2024. "Adaptive optimal secure wind power generation control for variable speed wind turbine systems via reinforcement learning," Applied Energy, Elsevier, vol. 353(PA).
  • Handle: RePEc:eee:appene:v:353:y:2024:i:pa:s0306261923013983
    DOI: 10.1016/j.apenergy.2023.122034

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261923013983
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2023.122034?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yin, Linfei & Li, Yu, 2022. "Hybrid multi-agent emotional deep Q network for generation control of multi-area integrated energy systems," Applied Energy, Elsevier, vol. 324(C).
    2. Zhao, Yanwei & Wang, Huanqing & Xu, Ning & Zong, Guangdeng & Zhao, Xudong, 2023. "Reinforcement learning-based decentralized fault tolerant control for constrained interconnected nonlinear systems," Chaos, Solitons & Fractals, Elsevier, vol. 167(C).
    3. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    4. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    5. Kumar Jadoun, Vinay & Rahul Prashanth, G & Suhas Joshi, Siddharth & Narayanan, K. & Malik, Hasmat & García Márquez, Fausto Pedro, 2022. "Optimal fuzzy based economic emission dispatch of combined heat and power units using dynamically controlled Whale Optimization Algorithm," Applied Energy, Elsevier, vol. 315(C).
    6. Pengcheng Ni & Zhiyuan Ye & Can Cao & Zhimin Guo & Jian Zhao & Xing He, 2023. "Cooperative Game-Based Collaborative Optimal Regulation-Assisted Digital Twins for Wide-Area Distributed Energy," Energies, MDPI, vol. 16(6), pages 1-17, March.
    7. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    8. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    9. Fan, Wei & Tan, Qingbo & Zhang, Amin & Ju, Liwei & Wang, Yuwei & Yin, Zhe & Li, Xudong, 2023. "A Bi-level optimization model of integrated energy system considering wind power uncertainty," Renewable Energy, Elsevier, vol. 202(C), pages 973-991.
    10. Li, Jiawen & Zhou, Tao & Keke, He & Yu, Hengwen & Du, Hongwei & Liu, Shuangyu & Cui, Haoyang, 2023. "Distributed quantum multiagent deep meta reinforcement learning for area autonomy energy management of a multiarea microgrid," Applied Energy, Elsevier, vol. 343(C).
    11. Hou, Guolian & Huang, Ting & Zheng, Fumeng & Gong, Linjuan & Huang, Congzhi & Zhang, Jianhua, 2023. "Application of multi-agent EADRC in flexible operation of combined heat and power plant considering carbon emission and economy," Energy, Elsevier, vol. 263(PB).
    12. Li, Jiawen & Yu, Tao & Yang, Bo, 2021. "A data-driven output voltage control of solid oxide fuel cell using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    13. Homod, Raad Z. & Mohammed, Hayder Ibrahim & Abderrahmane, Aissa & Alawi, Omer A. & Khalaf, Osamah Ibrahim & Mahdi, Jasim M. & Guedri, Kamel & Dhaidan, Nabeel S. & Albahri, A.S. & Sadeq, Abdellatif M., 2023. "Deep clustering of Lagrangian trajectory for multi-task learning to energy saving in intelligent buildings using cooperative multi-agent," Applied Energy, Elsevier, vol. 351(C).
    14. Ren, Kezheng & Liu, Jun & Liu, Xinglei & Nie, Yongxin, 2023. "Reinforcement Learning-Based Bi-Level strategic bidding model of Gas-fired unit in integrated electricity and natural gas markets preventing market manipulation," Applied Energy, Elsevier, vol. 336(C).
    15. Padullaparthi, Venkata Ramakrishna & Nagarathinam, Srinarayana & Vasan, Arunchandar & Menon, Vishnu & Sudarsanam, Depak, 2022. "FALCON- FArm Level CONtrol for wind turbines using multi-agent deep reinforcement learning," Renewable Energy, Elsevier, vol. 181(C), pages 445-456.
    16. Marzban, Hamid Reza, 2022. "A generalization of Müntz-Legendre polynomials and its implementation in optimal control of nonlinear fractional delay systems," Chaos, Solitons & Fractals, Elsevier, vol. 158(C).
    17. Li, Jiawen & Zhou, Tao, 2023. "Active fault-tolerant coordination energy management for a proton exchange membrane fuel cell using curriculum-based multiagent deep meta-reinforcement learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    18. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    19. Lu, Xin & Qiu, Jing & Zhang, Cuo & Lei, Gang & Zhu, Jianguo, 2024. "Seizing unconventional arbitrage opportunities in virtual power plants: A profitable and flexible recruitment approach," Applied Energy, Elsevier, vol. 358(C).
    20. Tu, Haicheng & Gu, Fengqiang & Zhang, Xi & Xia, Yongxiang, 2023. "Robustness analysis of power system under sequential attacks with incomplete information," Reliability Engineering and System Safety, Elsevier, vol. 232(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:353:y:2024:i:pa:s0306261923013983. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.