
A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation

Authors

  • Daniel Egan (Department of Automotive Engineering, Clemson University, Clemson, SC 29634, USA)
  • Qilun Zhu (Department of Automotive Engineering, Clemson University, Clemson, SC 29634, USA)
  • Robert Prucka (Department of Automotive Engineering, Clemson University, Clemson, SC 29634, USA)

Abstract

One major cost of improving automotive fuel economy while simultaneously reducing tailpipe emissions is increased powertrain complexity. This complexity has consequently increased the resources (both time and money) needed to develop such powertrains. Powertrain performance is heavily influenced by the quality of the controller and its calibration. Since traditional control development processes are becoming resource-intensive, better alternatives are worth pursuing. Recently, reinforcement learning (RL), a machine learning technique, has proven capable of creating optimal controllers for complex systems. The model-free nature of RL has the potential to streamline the control development process, possibly reducing the time and money required. This article reviews how choices in two areas affect the performance of RL-based powertrain controllers, to provide a better awareness of their benefits and consequences. First, we examine how RL algorithm action continuities and control–actuator continuities are matched, via native operation or conversion. Second, we discuss the formulation of the reward function. RL is able to optimize control policies defined by a wide spectrum of reward functions, including some that are difficult to implement with other techniques. Action and control–actuator continuity matching affects the ability of the RL-based controller to understand and operate the powertrain, while the reward function defines optimal behavior. Finally, opportunities for future RL-based powertrain control development are identified and discussed.
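To make the review's two themes concrete, consider the minimal sketch below. It is not taken from the paper; the environment interface, discretization granularity, state binning, and reward weights are all illustrative assumptions. It shows (1) how a discrete-action algorithm such as tabular Q-learning can be matched to a continuous actuator by converting the actuator's command range into a finite set of levels, and (2) a reward function of the weighted multi-objective form common in hybrid-vehicle energy management, trading off instantaneous fuel use against battery state-of-charge (SOC) deviation.

    import numpy as np

    # --- Continuity matching: discrete-action agent -> continuous actuator ---
    # A discrete-action algorithm (e.g., tabular Q-learning or DQN) cannot emit
    # an arbitrary real-valued command, so the continuous engine-torque range
    # is converted into a finite set of levels the agent can index.
    N_ACTIONS = 11                                # illustrative discretization
    TORQUE_MIN, TORQUE_MAX = 0.0, 250.0           # N*m, assumed actuator limits
    torque_levels = np.linspace(TORQUE_MIN, TORQUE_MAX, N_ACTIONS)

    def action_to_torque(action_index: int) -> float:
        """Map a discrete agent action to a continuous actuator command."""
        return float(torque_levels[action_index])

    # --- Reward formulation: weighted multi-objective reward (assumed form) ---
    # A common pattern in the energy-management literature: penalize
    # instantaneous fuel use and deviation of battery state of charge (SOC)
    # from a target, with hand-tuned weights.
    W_FUEL, W_SOC = 1.0, 50.0                     # hypothetical weights
    SOC_TARGET = 0.6

    def reward(fuel_rate_g_per_s: float, soc: float) -> float:
        """Negative cost: lower fuel use and smaller SOC error are better."""
        return -(W_FUEL * fuel_rate_g_per_s + W_SOC * (soc - SOC_TARGET) ** 2)

    # Minimal epsilon-greedy Q-learning over the discretized actions, assuming
    # the environment supplies a discretized state index `s`.
    Q = np.zeros((100, N_ACTIONS))                # 100 assumed state bins
    alpha, gamma, epsilon = 0.1, 0.95, 0.1

    def q_update(s: int, a: int, r: float, s_next: int) -> None:
        """One-step temporal-difference update of the action-value table."""
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    def select_action(s: int) -> int:
        """Epsilon-greedy action selection over the discrete action set."""
        if np.random.rand() < epsilon:
            return np.random.randint(N_ACTIONS)   # explore
        return int(Q[s].argmax())                 # exploit

A natively continuous algorithm such as DDPG or TD3 (see the TD3-based strategies among the related items below) would instead output the torque command directly, avoiding the discretization step; the review examines the consequences of each matching approach.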

Suggested Citation

  • Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:8:p:3450-:d:1123720

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/8/3450/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/8/3450/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    2. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    3. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    4. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    5. Marouane Adnane & Ahmed Khoumsi & João Pedro F. Trovão, 2023. "Efficient Management of Energy Consumption of Electric Vehicles Using Machine Learning—A Systematic and Comprehensive Survey," Energies, MDPI, vol. 16(13), pages 1-39, June.
    6. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).
    7. Wang, Hanchen & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2023. "A comparative study of 13 deep reinforcement learning based energy management methods for a hybrid electric vehicle," Energy, Elsevier, vol. 266(C).
    8. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    9. Zhou, Jianhao & Xue, Yuan & Xu, Da & Li, Chaoxiong & Zhao, Wanzhong, 2022. "Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning," Energy, Elsevier, vol. 242(C).
    10. Fuwu Yan & Jinhai Wang & Changqing Du & Min Hua, 2022. "Multi-Objective Energy Management Strategy for Hybrid Electric Vehicles Based on TD3 with Non-Parametric Reward Function," Energies, MDPI, vol. 16(1), pages 1-17, December.
    11. Wang, Yue & Li, Keqiang & Zeng, Xiaohua & Gao, Bolin & Hong, Jichao, 2023. "Investigation of novel intelligent energy management strategies for connected HEB considering global planning of fixed-route information," Energy, Elsevier, vol. 263(PB).
    12. Xiao, Boyi & Yang, Weiwei & Wu, Jiamin & Walker, Paul D. & Zhang, Nong, 2022. "Energy management strategy via maximum entropy reinforcement learning for an extended range logistics vehicle," Energy, Elsevier, vol. 253(C).
    13. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    14. Liu, Yonggang & Wu, Yitao & Wang, Xiangyu & Li, Liang & Zhang, Yuanjian & Chen, Zheng, 2023. "Energy management for hybrid electric vehicles based on imitation reinforcement learning," Energy, Elsevier, vol. 263(PC).
    15. Zhang, Hao & Fan, Qinhao & Liu, Shang & Li, Shengbo Eben & Huang, Jin & Wang, Zhi, 2021. "Hierarchical energy management strategy for plug-in hybrid electric powertrain integrated with dual-mode combustion engine," Applied Energy, Elsevier, vol. 304(C).
    16. Zhang, Wei & Wang, Jixin & Xu, Zhenyu & Shen, Yuying & Gao, Guangzong, 2022. "A generalized energy management framework for hybrid construction vehicles via model-based reinforcement learning," Energy, Elsevier, vol. 260(C).
    17. Chen, Zheng & Hu, Hengjie & Wu, Yitao & Zhang, Yuanjian & Li, Guang & Liu, Yonggang, 2020. "Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 211(C).
    18. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    19. Yang, Ningkang & Han, Lijin & Bo, Lin & Liu, Baoshuai & Chen, Xiuqi & Liu, Hui & Xiang, Changle, 2023. "Real-time adaptive energy management for off-road hybrid electric vehicles based on decision-time planning," Energy, Elsevier, vol. 282(C).
    20. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
