
Energy management strategy via maximum entropy reinforcement learning for an extended range logistics vehicle

Author

Listed:
  • Xiao, Boyi
  • Yang, Weiwei
  • Wu, Jiamin
  • Walker, Paul D.
  • Zhang, Nong

Abstract

The modern energy management strategy (EMS) plays a vital role in the energy efficiency of the extended range electric vehicle. However, some modern strategies, such as model predictive control (MPC) and dynamic programming (DP), have limited practical potential because they depend on pre-known environment information and are subject to noise interference. The reinforcement learning (RL) control strategy can instead be adopted as an online controller that interacts with the vehicle and the environment. In this study, a novel auxiliary power unit (APU) charging strategy with multi-objective optimization is proposed to achieve high fuel conversion efficiency while maintaining battery charging health. The state-of-the-art algorithm, Soft Actor-Critic (SAC), is applied to achieve better exploration of the possible APU behaviour and to resolve the sensitivity and poor-convergence problems reported in current RL studies. Its performance is further verified against the results of the Deep Deterministic Policy Gradient (DDPG) algorithm and DP. Three innovative targets are selected as the RL rewards for optimization: the engine fuel rate, the SOC charging trajectory, and the battery charging rate (C-rate). The first adoption of battery C-rate monitoring in an RL-based energy management strategy helps extend the battery lifespan by guarding against excessive discharge. The comparative results show that SAC converged 36% faster than DDPG while providing a smoother and more stable action space. The fuel consumption with SAC also outperforms that of DDPG by around 3%, achieving almost 95% of the global optimization result. The successful deployment of the SAC algorithm as an EMS indicates its standout ability to deal with wide-range actions and states with high randomness, revealing its practical potential compared with existing RL strategies.
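
For context, the "maximum entropy" in the title refers to SAC's entropy-regularized objective, which values both expected return and policy randomness to encourage exploration. In standard notation,

    J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right],

where \alpha is a temperature parameter weighting the entropy term \mathcal{H}. As a reading aid, the sketch below shows one plausible way to fold the abstract's three reward targets (engine fuel rate, SOC charging trajectory, and battery C-rate) into a single scalar reward for such an agent; the function name, weights, and C-rate limit are illustrative assumptions, not the authors' published formulation.

    def ems_reward(fuel_rate, soc, soc_ref, c_rate,
                   w_fuel=1.0, w_soc=0.5, w_crate=0.5, c_rate_limit=2.0):
        # Hypothetical multi-objective EMS reward: all weights and the
        # C-rate limit are assumptions chosen for illustration only.
        r_fuel = -w_fuel * fuel_rate                   # penalize instantaneous fuel use
        r_soc = -w_soc * (soc - soc_ref) ** 2          # track the reference SOC trajectory
        excess = max(0.0, abs(c_rate) - c_rate_limit)  # C-rate beyond the safe band
        r_crate = -w_crate * excess ** 2               # penalize battery-stressing C-rates
        return r_fuel + r_soc + r_crate

    # Example: modest fuel use, SOC slightly below its reference, safe C-rate
    print(ems_reward(fuel_rate=1.2, soc=0.55, soc_ref=0.60, c_rate=1.5))

Under a SAC agent, such a reward is maximized jointly with the policy entropy, so the APU power command explores stochastically early in training and settles toward a near-deterministic schedule as the temperature \alpha is annealed.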

Suggested Citation

  • Xiao, Boyi & Yang, Weiwei & Wu, Jiamin & Walker, Paul D. & Zhang, Nong, 2022. "Energy management strategy via maximum entropy reinforcement learning for an extended range logistics vehicle," Energy, Elsevier, vol. 253(C).
  • Handle: RePEc:eee:energy:v:253:y:2022:i:c:s0360544222010088
    DOI: 10.1016/j.energy.2022.124105

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544222010088
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2022.124105?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Jen-Chiun Guan & Bo-Chiuan Chen & Yuh-Yih Wu, 2019. "Design of an Adaptive Power Management Strategy for Range Extended Electric Vehicles," Energies, MDPI, vol. 12(9), pages 1-24, April.
    2. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    3. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    4. Liu, Teng & Wang, Bo & Yang, Chenglang, 2018. "Online Markov Chain-based energy management for a hybrid tracked vehicle with speedy Q-learning," Energy, Elsevier, vol. 160(C), pages 544-555.
    5. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    6. Han, Xuefeng & He, Hongwen & Wu, Jingda & Peng, Jiankun & Li, Yuecheng, 2019. "Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle," Applied Energy, Elsevier, vol. 254(C).
    7. Xiao, B. & Ruan, J. & Yang, W. & Walker, P.D. & Zhang, N., 2021. "A review of pivotal energy management strategies for extended range electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 149(C).
    8. Yang, Yalian & Hu, Xiaosong & Pei, Huanxin & Peng, Zhiyuan, 2016. "Comparison of power-split and parallel hybrid powertrain architectures with a single electric machine: Dynamic programming approach," Applied Energy, Elsevier, vol. 168(C), pages 683-690.
    9. Zuo, Hongyan & Zhang, Bin & Huang, Zhonghua & Wei, Kexiang & Zhu, Hong & Tan, Jiqiu, 2022. "Effect analysis on SOC values of the power lithium manganate battery during discharging process and its intelligent estimation," Energy, Elsevier, vol. 238(PB).
    10. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    11. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Liang, Zhaowen & Ruan, Jiageng & Wang, Zhenpo & Liu, Kai & Li, Bin, 2024. "Soft actor-critic-based EMS design for dual motor battery electric bus," Energy, Elsevier, vol. 288(C).
    2. Kunyu Wang & Rong Yang & Yongjian Zhou & Wei Huang & Song Zhang, 2022. "Design and Improvement of SD3-Based Energy Management Strategy for a Hybrid Electric Urban Bus," Energies, MDPI, vol. 15(16), pages 1-21, August.
    3. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    4. Yang, Xiaofeng & He, Hongwen & Wei, Zhongbao & Wang, Rui & Xu, Ke & Zhang, Dong, 2023. "Enabling Safety-Enhanced fast charging of electric vehicles via soft actor Critic-Lagrange DRL algorithm in a Cyber-Physical system," Applied Energy, Elsevier, vol. 329(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    3. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    4. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    5. Huang, Ruchen & He, Hongwen & Zhao, Xuyang & Wang, Yunlong & Li, Menglin, 2022. "Battery health-aware and naturalistic data-driven energy management for hybrid electric bus based on TD3 deep reinforcement learning algorithm," Applied Energy, Elsevier, vol. 321(C).
    6. Xiao, B. & Ruan, J. & Yang, W. & Walker, P.D. & Zhang, N., 2021. "A review of pivotal energy management strategies for extended range electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 149(C).
    7. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    8. Yang, Dongpo & Liu, Tong & Song, Dafeng & Zhang, Xuanming & Zeng, Xiaohua, 2023. "A real time multi-objective optimization Guided-MPC strategy for power-split hybrid electric bus based on velocity prediction," Energy, Elsevier, vol. 276(C).
    9. Wang, Yue & Li, Keqiang & Zeng, Xiaohua & Gao, Bolin & Hong, Jichao, 2023. "Investigation of novel intelligent energy management strategies for connected HEB considering global planning of fixed-route information," Energy, Elsevier, vol. 263(PB).
    10. Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).
    11. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).
    12. Geng, Wenran & Lou, Diming & Wang, Chen & Zhang, Tong, 2020. "A cascaded energy management optimization method of multimode power-split hybrid electric vehicles," Energy, Elsevier, vol. 199(C).
    13. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    14. Fengqi Zhang & Lihua Wang & Serdar Coskun & Hui Pang & Yahui Cui & Junqiang Xi, 2020. "Energy Management Strategies for Hybrid Electric Vehicles: Review, Classification, Comparison, and Outlook," Energies, MDPI, vol. 13(13), pages 1-35, June.
    15. Kunyu Wang & Rong Yang & Yongjian Zhou & Wei Huang & Song Zhang, 2022. "Design and Improvement of SD3-Based Energy Management Strategy for a Hybrid Electric Urban Bus," Energies, MDPI, vol. 15(16), pages 1-21, August.
    16. Ramya Kuppusamy & Srete Nikolovski & Yuvaraja Teekaraman, 2023. "Review of Machine Learning Techniques for Power Quality Performance Evaluation in Grid-Connected Systems," Sustainability, MDPI, vol. 15(20), pages 1-29, October.
    17. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    18. Yang, Ningkang & Ruan, Shumin & Han, Lijin & Liu, Hui & Guo, Lingxiong & Xiang, Changle, 2023. "Reinforcement learning-based real-time intelligent energy management for hybrid electric vehicles in a model predictive control framework," Energy, Elsevier, vol. 270(C).
    19. Alessia Musa & Pier Giuseppe Anselma & Giovanni Belingardi & Daniela Anna Misul, 2023. "Energy Management in Hybrid Electric Vehicles: A Q-Learning Solution for Enhanced Drivability and Energy Efficiency," Energies, MDPI, vol. 17(1), pages 1-20, December.
    20. Qi, Chunyang & Song, Chuanxue & Xiao, Feng & Song, Shixin, 2022. "Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method," Energy, Elsevier, vol. 250(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:253:y:2022:i:c:s0360544222010088. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.