Printed from https://ideas.repec.org/a/eee/appene/v390y2025ics0306261925005677.html

An efficient energy management strategy of a hybrid electric unmanned aerial vehicle considering turboshaft engine speed regulation: A deep reinforcement learning approach

Author

Listed:
  • Chen, Yincong
  • Wang, Weida
  • Yang, Chao
  • Liang, Buyuan
  • Liu, Wenjie

Abstract

The low-altitude economy has great potential for wide application across different areas and can promote the development of various industries. As the main carriers in this strategic emerging economy, large-sized unmanned aerial vehicles (UAVs) continue to gain attention for their high mobility and broad range of applications. To extend flight range, hybrid electric UAVs are emerging as a promising solution, unlocking energy-saving potential through the energy management strategy (EMS). Most research on energy-saving hybrid electric systems either overlooks the importance of turboshaft engine speed regulation or incorporates it into the EMS without considering the dwell time constraints (DTCs) of speed changes. DTCs accompany engine speed regulation and are crucial for ensuring the safe operation of the turboshaft engine; without them, a control strategy may deliver suboptimal real-world performance or even cause engine failure. However, integrating DTCs into the control problem introduces high nonlinearity and computational complexity, making the optimal control problem difficult to solve. To address these issues, an efficient reinforcement learning control strategy is proposed that jointly optimizes energy management and turboshaft engine speed regulation for hybrid electric UAVs. First, a mathematical model of the hybrid powertrain with a turboshaft engine generator set is established. Second, a clipped proximal policy optimization agent is developed to solve the optimal EMS control problem considering engine speed regulation. In particular, an action mapping and a minimum-DTC programming method are proposed to enhance convergence and maintain system safety. Third, real-time flight cycles from our prototype UAV are incorporated into the training loop to accurately reflect actual power flow during flight. Finally, the effectiveness and efficiency of the proposed control strategy are validated through simulation.
Results demonstrate the effectiveness of the proposed algorithm, which comes within nearly 95 % of the globally optimal solution. The real-time control performance of the proposed strategy is further verified through hardware-in-the-loop experiments.
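The paper's agent itself is not reproduced here, but the two ingredients named in the abstract, a clipped proximal policy optimization objective and an action mapping that enforces a minimum dwell time on engine-speed changes, can be sketched generically. The following Python sketch is illustrative only: all names, signatures, and parameter values are assumptions, not taken from the paper.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard clipped PPO surrogate objective (to be maximized).

    `ratio` is pi_new(a|s) / pi_old(a|s); the clip keeps the policy
    update within a trust region of width `eps`.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

class DwellTimeActionMapper:
    """Illustrative action mapping with a minimum dwell-time constraint.

    A requested engine-speed setpoint is accepted only after the current
    setpoint has been held for at least `min_dwell` control steps;
    otherwise the previous setpoint is kept. This shields the turboshaft
    engine from speed commands that change too quickly.
    """
    def __init__(self, min_dwell, initial_speed):
        self.min_dwell = min_dwell
        self.speed = initial_speed   # currently applied setpoint
        self.held = 0                # steps the current setpoint has been held

    def map(self, requested_speed):
        if requested_speed != self.speed and self.held >= self.min_dwell:
            self.speed = requested_speed  # change allowed: dwell time satisfied
            self.held = 1
        else:
            self.held += 1                # change rejected or no change requested
        return self.speed
```

In this sketch the mapper sits between the agent's raw action and the plant, so the policy can propose any setpoint while the applied command always respects the dwell-time constraint; how the paper combines this with its "minimum DTC programming method" is not detailed in the abstract.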

Suggested Citation

  • Chen, Yincong & Wang, Weida & Yang, Chao & Liang, Buyuan & Liu, Wenjie, 2025. "An efficient energy management strategy of a hybrid electric unmanned aerial vehicle considering turboshaft engine speed regulation: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 390(C).
  • Handle: RePEc:eee:appene:v:390:y:2025:i:c:s0306261925005677
    DOI: 10.1016/j.apenergy.2025.125837

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925005677
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.125837?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.


    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Tang, Jun, 2026. "Can low altitude economy development bring economic and environmental dividends? --evidence from Chinese cities," Journal of Air Transport Management, Elsevier, vol. 131(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yang, Chao & Kan, Sibo & Wang, Weida & Wang, Muyao & Zha, Mingjun & Yang, Liuquan & Yan, Qingdong, 2025. "Extreme-environment-aware adaptive energy management strategy for heavy-duty series hybrid electric vehicles based on data-driven method," Energy, Elsevier, vol. 340(C).
    2. Keerthana Sivamayil & Elakkiya Rajasekar & Belqasem Aljafari & Srete Nikolovski & Subramaniyaswamy Vairavasundaram & Indragandhi Vairavasundaram, 2023. "A Systematic Study on Reinforcement Learning Based Applications," Energies, MDPI, vol. 16(3), pages 1-23, February.
    3. Lin, Xinyou & Ren, Yukun & Xu, Xinhao, 2025. "Stochastic velocity-prediction conscious energy management strategy based self-learning Markov algorithm for a fuel cell hybrid electric vehicle," Energy, Elsevier, vol. 320(C).
    4. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    5. Qi, Qi & Zhang, Deying & Hu, Xiang & Li, Xiao & Qi, Bing, 2025. "An optimal dispatch strategy for 5G base stations equipped with battery swapping cabinets," Applied Energy, Elsevier, vol. 392(C).
    6. Chen, Yan & Zhang, Ruiqian & Lyu, Jiayi & Hou, Yuqi, 2024. "AI and Nuclear: A perfect intersection of danger and potential?," Energy Economics, Elsevier, vol. 133(C).
    7. Chen, Yifan & Yang, Liuquan & Yang, Chao & Wang, Weida & Zha, Mingjun & Gao, Pu & Liu, Hui, 2024. "Real-time analytical solution to energy management for hybrid electric vehicles using intelligent driving cycle recognition," Energy, Elsevier, vol. 307(C).
    8. Yang, Chao & Lu, Zhexi & Wang, Weida & Wang, Muyao & Zhao, Jing, 2023. "An efficient intelligent energy management strategy based on deep reinforcement learning for hybrid electric flying car," Energy, Elsevier, vol. 280(C).
    9. Wang, Shuai & Wu, Xiuheng & Zhao, Xueyan & Wang, Shilong & Xie, Bin & Song, Zhenghe & Wang, Dongqing, 2023. "Co-optimization energy management strategy for a novel dual-motor drive system of electric tractor considering efficiency and stability," Energy, Elsevier, vol. 281(C).
    10. Yang, Zhixue & Ren, Zhouyang & Li, Hui & Sun, Zhiyuan & Feng, Jianbing & Xia, Weiyi, 2024. "A multi-stage stochastic dispatching method for electricity‑hydrogen integrated energy systems driven by model and data," Applied Energy, Elsevier, vol. 371(C).
    11. Chen, Jiawen & Zou, Yuan & Zhang, Jun & Zhang, Xudong & Du, Guodong & Meng, Yihao, 2026. "Optimal energy management at solar-integrated hubs for e-taxis with battery swapping and fast charging under power network constraints," Applied Energy, Elsevier, vol. 402(PB).
    12. Sushanta Gautam & Austin Szczublewski & Aidan Fox & Sadab Mahmud & Ahmad Javaid & Temitayo O. Olowu & Tyler Westover & Raghav Khanna, 2025. "Digital Real-Time Simulation and Power Quality Analysis of a Hydrogen-Generating Nuclear-Renewable Integrated Energy System," Energies, MDPI, vol. 18(4), pages 1-22, February.
    13. Pampa Sinha & Kaushik Paul & Sanchari Deb & Sulabh Sachan, 2023. "Comprehensive Review Based on the Impact of Integrating Electric Vehicle and Renewable Energy Sources to the Grid," Energies, MDPI, vol. 16(6), pages 1-39, March.
    14. Liu, Ming & Hao, Han & Sun, Xin & Qu, Xiaobo & Wang, Kai & Qian, Yuping & Hao, Xu & Xun, Dengye & Geng, Jingxuan & Dou, Hao & Deng, Yunfeng & Du, Shilong & Liu, Zongwei & Zhao, Fuquan, 2024. "Exploring the key technologies needed for the commercialization of electric flying cars: A levelized cost and profitability analysis," Energy, Elsevier, vol. 303(C).
    15. Hunek, Wojciech P. & Feliks, Tomasz, 2025. "A new set of multivariable predictive control algorithms for time-delayed nonsquare systems of different domains: A minimum-energy examination," Applied Energy, Elsevier, vol. 381(C).
    16. Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
    17. Xu, Jinghui & Yang, Kaiqiang & Wang, Zepeng & Wang, Xizhen & Li, Xueshun & Zhao, Yongjun, 2025. "Thermodynamic performance and decoupling characteristics analysis of a dual-shaft hybrid propulsion system integrated solid oxide fuel cell for commercial aircraft," Applied Energy, Elsevier, vol. 391(C).
    18. Cui, Feifei & An, Dou & Xi, Huan & Ren, Zhigang, 2025. "Collaborative scheduling optimization of hydrogen-enhanced integrated energy system via goal-conditioned hierarchical reinforcement learning," Energy, Elsevier, vol. 338(C).
    19. Mohammed Gronfula & Khairy Sayed, 2025. "AI-Driven Predictive Control for Dynamic Energy Optimization in Flying Cars," Energies, MDPI, vol. 18(7), pages 1-35, April.
    20. Cui, Feifei & An, Dou & Xi, Huan, 2024. "Integrated energy hub dispatch with a multi-mode CAES–BESS hybrid system: An option-based hierarchical reinforcement learning approach," Applied Energy, Elsevier, vol. 374(C).

    More about this item



    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:390:y:2025:i:c:s0306261925005677. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.