
An efficient energy management strategy of a hybrid electric unmanned aerial vehicle considering turboshaft engine speed regulation: A deep reinforcement learning approach

Author

Listed:
  • Chen, Yincong
  • Wang, Weida
  • Yang, Chao
  • Liang, Buyuan
  • Liu, Wenjie

Abstract

The low-altitude economy, with its great potential, can be applied widely across different areas and promote the development of various industries. As the main carriers of this strategic emerging economy, large unmanned aerial vehicles (UAVs) keep gaining attention due to their high mobility and broad range of applications. To extend flight range, hybrid electric UAVs are emerging as a promising solution, leveraging their energy-saving potential through an energy management strategy (EMS). Most research on energy-saving hybrid electric systems either overlooks the importance of turboshaft engine speed regulation or incorporates it into the EMS without considering the dwell time constraints (DTCs) on speed changes. DTCs accompany engine speed regulation and are crucial for ensuring the safe operation of the turboshaft engine; without them, such control strategies may yield suboptimal real-world performance and even engine failure. However, integrating DTCs introduces high nonlinearity and computational complexity, making the optimal control problem difficult to solve. To address these issues, an efficient reinforcement learning control strategy is proposed to optimize both energy management and turboshaft engine speed regulation for hybrid electric UAVs. First, a mathematical model of the hybrid powertrain with a turboshaft engine generator set is established. Second, a clipped proximal policy optimization agent is developed to solve the optimal EMS control problem considering engine speed regulation. In particular, an action mapping and a minimum DTC programming method are proposed to enhance convergence and maintain system safety. Third, real-time flight cycles from our prototype UAV are incorporated into the training loop to accurately reflect the actual power flow during flight. Finally, the effectiveness and efficiency of the proposed control strategy are validated through simulation. Results show that the proposed algorithm reaches nearly 95 % of the globally optimal solution, and its real-time control performance is further verified through hardware-in-the-loop experiments.
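To make the two ingredients of the abstract concrete, the minimal sketch below (not taken from the paper; the function names, action indexing, dwell-time length, and all numeric values are illustrative assumptions) shows how a clipped proximal policy optimization surrogate objective can be paired with a simple action mask that enforces a minimum dwell time on engine-speed changes.

```python
# Illustrative sketch only: a clipped-PPO surrogate objective plus a
# minimum-dwell-time action mask for engine-speed actions. All names,
# the action indexing, and the numbers are assumptions for illustration.
import numpy as np

def clipped_ppo_objective(ratio, advantage, eps=0.2):
    """Standard clipped surrogate objective of PPO: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

def mask_speed_actions(last_change_step, current_step, n_speed_levels, min_dwell_steps=10):
    """Return a boolean mask over engine-speed actions.
    If the speed level was changed fewer than `min_dwell_steps` steps ago,
    only the 'hold current speed' action (index 0, by assumption) stays available."""
    mask = np.ones(n_speed_levels, dtype=bool)
    if current_step - last_change_step < min_dwell_steps:
        mask[1:] = False  # forbid all speed-change actions, keep 'hold'
    return mask

# Toy usage with random placeholder rollout data.
rng = np.random.default_rng(0)
ratios = rng.uniform(0.8, 1.2, size=64)      # new/old policy probability ratios
advantages = rng.normal(size=64)             # advantage estimates
print("surrogate objective:", clipped_ppo_objective(ratios, advantages))
print("mask shortly after a speed change:", mask_speed_actions(5, 8, n_speed_levels=4))
```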

Suggested Citation

  • Chen, Yincong & Wang, Weida & Yang, Chao & Liang, Buyuan & Liu, Wenjie, 2025. "An efficient energy management strategy of a hybrid electric unmanned aerial vehicle considering turboshaft engine speed regulation: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 390(C).
  • Handle: RePEc:eee:appene:v:390:y:2025:i:c:s0306261925005677
    DOI: 10.1016/j.apenergy.2025.125837

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925005677
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.125837?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hu, Qinru & Hu, Simon & Shen, Shiyu & Ouyang, Yanfeng & Chen, Xiqun (Michael), 2025. "Optimizing autonomous electric taxi operations with integrated mobile charging services: An approximate dynamic programming approach," Applied Energy, Elsevier, vol. 378(PB).
    2. Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
    3. Su, Yongxin & Yue, Shuaixian & Qiu, Lei & Chen, Jie & Wang, Rui & Tan, Mao, 2024. "Energy management for scalable battery swapping stations: A deep reinforcement learning and mathematical optimization cascade approach," Applied Energy, Elsevier, vol. 365(C).
    4. Yang, Chao & Lu, Zhexi & Wang, Weida & Wang, Muyao & Zhao, Jing, 2023. "An efficient intelligent energy management strategy based on deep reinforcement learning for hybrid electric flying car," Energy, Elsevier, vol. 280(C).
    5. Wei, Zhengchao & Ma, Yue & Yang, Ningkang & Ruan, Shumin & Xiang, Changle, 2023. "Reinforcement learning based power management integrating economic rotational speed of turboshaft engine and safety constraints of battery for hybrid electric power system," Energy, Elsevier, vol. 263(PB).
    6. Wang, Weida & Chen, Yincong & Yang, Chao & Li, Ying & Xu, Bin & Xiang, Changle, 2022. "An enhanced hypotrochoid spiral optimization algorithm based intertwined optimal sizing and control strategy of a hybrid electric air-ground vehicle," Energy, Elsevier, vol. 257(C).
    7. Chen, Ruihu & Yang, Chao & Ma, Yue & Wang, Weida & Wang, Muyao & Du, Xuelong, 2022. "Online learning predictive power coordinated control strategy for off-road hybrid electric vehicles considering the dynamic response of engine generator set," Applied Energy, Elsevier, vol. 323(C).
    8. Shuo Zhang & Aotian Ma & Teng Zhang & Ning Ge & Xing Huang, 2024. "A Performance Simulation Methodology for a Whole Turboshaft Engine Based on Throughflow Modelling," Energies, MDPI, vol. 17(2), pages 1-20, January.
    9. Heidari, Amirreza & Girardin, Luc & Dorsaz, Cédric & Maréchal, François, 2025. "A trustworthy reinforcement learning framework for autonomous control of a large-scale complex heating system: Simulation and field implementation," Applied Energy, Elsevier, vol. 378(PA).
    10. Yi, Zonggen & Luo, Yusheng & Westover, Tyler & Katikaneni, Sravya & Ponkiya, Binaka & Sah, Suba & Mahmud, Sadab & Raker, David & Javaid, Ahmad & Heben, Michael J. & Khanna, Raghav, 2022. "Deep reinforcement learning based optimization for a tightly coupled nuclear renewable integrated energy system," Applied Energy, Elsevier, vol. 328(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Keerthana Sivamayil & Elakkiya Rajasekar & Belqasem Aljafari & Srete Nikolovski & Subramaniyaswamy Vairavasundaram & Indragandhi Vairavasundaram, 2023. "A Systematic Study on Reinforcement Learning Based Applications," Energies, MDPI, vol. 16(3), pages 1-23, February.
    2. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    3. Chen, Yifan & Yang, Liuquan & Yang, Chao & Wang, Weida & Zha, Mingjun & Gao, Pu & Liu, Hui, 2024. "Real-time analytical solution to energy management for hybrid electric vehicles using intelligent driving cycle recognition," Energy, Elsevier, vol. 307(C).
    4. Yang, Zhixue & Ren, Zhouyang & Li, Hui & Sun, Zhiyuan & Feng, Jianbing & Xia, Weiyi, 2024. "A multi-stage stochastic dispatching method for electricity‑hydrogen integrated energy systems driven by model and data," Applied Energy, Elsevier, vol. 371(C).
    5. Hunek, Wojciech P. & Feliks, Tomasz, 2025. "A new set of multivariable predictive control algorithms for time-delayed nonsquare systems of different domains: A minimum-energy examination," Applied Energy, Elsevier, vol. 381(C).
    6. Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
    7. Mohammed Gronfula & Khairy Sayed, 2025. "AI-Driven Predictive Control for Dynamic Energy Optimization in Flying Cars," Energies, MDPI, vol. 18(7), pages 1-35, April.
    8. Cui, Feifei & An, Dou & Xi, Huan, 2024. "Integrated energy hub dispatch with a multi-mode CAES–BESS hybrid system: An option-based hierarchical reinforcement learning approach," Applied Energy, Elsevier, vol. 374(C).
    9. Yang, Ting & Wang, Qiancheng & Wang, Xudong & Wang, Lin & Geng, Yinan, 2025. "Low-carbon economic distributed dispatch for district-level integrated energy system considering privacy protection and demand response," Applied Energy, Elsevier, vol. 383(C).
    10. Prabawa, Panggah & Choi, Dae-Hyun, 2024. "Safe deep reinforcement learning-assisted two-stage energy management for active power distribution networks with hydrogen fueling stations," Applied Energy, Elsevier, vol. 375(C).
    11. Serhii Vladov & Maryna Bulakh & Jan Czyżewski & Oleksii Lytvynov & Victoria Vysotska & Victor Vasylenko, 2024. "Method for Helicopter Turboshaft Engines Controlling Energy Characteristics Through Regulating Free Turbine Rotor Speed and Fuel Consumption Based on Neural Networks," Energies, MDPI, vol. 17(22), pages 1-23, November.
    12. Gao, Yuan & Tahir, Mustafa & Siano, Pierluigi & Bi, Yue & Hu, Sile & Yang, Jiaqiang, 2025. "Optimization of renewable energy-based integrated energy systems: A three-stage stochastic robust model," Applied Energy, Elsevier, vol. 377(PD).
    13. Bi, Congbo & Liu, Di & Zhu, Lipeng & Li, Shiyang & Wu, Xiaochen & Lu, Chao, 2025. "Short-term voltage stability emergency control strategy pre-formulation for massive operating scenarios via adversarial reinforcement learning," Applied Energy, Elsevier, vol. 389(C).
    14. Wu, Qingyang & Li, Gen & Liu, Ming & Zhang, Yufeng & Yan, Junjie & Deguchi, Yoshihiro, 2024. "The enhancement of primary frequency regulation ability of combined water and power plant based on nuclear energy: Dynamic modelling and control strategy optimization," Energy, Elsevier, vol. 313(C).
    15. Lin, Xinyou & Ren, Yukun & Xu, Xinhao, 2025. "Stochastic velocity-prediction conscious energy management strategy based self-learning Markov algorithm for a fuel cell hybrid electric vehicle," Energy, Elsevier, vol. 320(C).
    16. Chen, Yan & Zhang, Ruiqian & Lyu, Jiayi & Hou, Yuqi, 2024. "AI and Nuclear: A perfect intersection of danger and potential?," Energy Economics, Elsevier, vol. 133(C).
    17. Yang, Chao & Lu, Zhexi & Wang, Weida & Wang, Muyao & Zhao, Jing, 2023. "An efficient intelligent energy management strategy based on deep reinforcement learning for hybrid electric flying car," Energy, Elsevier, vol. 280(C).
    18. Wang, Shuai & Wu, Xiuheng & Zhao, Xueyan & Wang, Shilong & Xie, Bin & Song, Zhenghe & Wang, Dongqing, 2023. "Co-optimization energy management strategy for a novel dual-motor drive system of electric tractor considering efficiency and stability," Energy, Elsevier, vol. 281(C).
    19. Sushanta Gautam & Austin Szczublewski & Aidan Fox & Sadab Mahmud & Ahmad Javaid & Temitayo O. Olowu & Tyler Westover & Raghav Khanna, 2025. "Digital Real-Time Simulation and Power Quality Analysis of a Hydrogen-Generating Nuclear-Renewable Integrated Energy System," Energies, MDPI, vol. 18(4), pages 1-22, February.
    20. Pampa Sinha & Kaushik Paul & Sanchari Deb & Sulabh Sachan, 2023. "Comprehensive Review Based on the Impact of Integrating Electric Vehicle and Renewable Energy Sources to the Grid," Energies, MDPI, vol. 16(6), pages 1-39, March.

