
A novel sim2real reinforcement learning algorithm for process control

Author

Listed:
  • Liang, Huiping
  • Xie, Junyao
  • Huang, Biao
  • Li, Yonggang
  • Sun, Bei
  • Yang, Chunhua

Abstract

While reinforcement learning (RL) has potential in advanced process control and optimization, its direct interaction with real industrial processes can pose safety concerns. Model-based pre-training of RL may alleviate such risks, but the intricate nature of industrial processes makes it difficult to build entirely accurate simulation models; consequently, RL-based controllers that rely on simulation models can easily suffer from model-plant mismatch. Alternatively, pre-training RL on offline data can also mitigate safety risks, but it requires well-represented historical datasets, which are demanding to obtain because industrial processes mostly run under a regulatory mode with basic controllers. To handle these issues, this paper proposes a novel sim2real reinforcement learning algorithm. First, a state adaptor (SA) is proposed to align simulated states with real states and thereby mitigate the model-plant mismatch. Then, a fixed-horizon return is designed to replace the traditional infinite-step return, providing genuine labels for the critic network and enhancing learning efficiency and stability. Finally, building on proximal policy optimization (PPO), the SA-PPO method is introduced to implement the proposed sim2real algorithm. Experimental results show that SA-PPO improves performance by 1.96% in MSE and by 21.64% in R on average for a roasting process simulation, verifying the effectiveness of the proposed method.
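The two components named in the abstract can be sketched generically: a state adaptor that maps simulated states toward observed real states, and a fixed-horizon return that truncates the usual infinite discounted sum so the critic's target can be computed exactly from logged rewards. The sketch below is illustrative only; all names are hypothetical, it uses a toy 1-D linear adaptor rather than the paper's neural-network SA, and the PPO machinery is omitted entirely.

```python
# Illustrative sketch only -- not the authors' SA-PPO implementation.

def fixed_horizon_return(rewards, gamma=0.99, horizon=5):
    """Discounted return truncated after `horizon` steps.

    Replaces the infinite-step return sum_{k>=0} gamma^k * r_k with the
    finite sum_{k<horizon} gamma^k * r_k, giving the critic network a
    label computable exactly from a finite window of logged rewards.
    """
    return sum(gamma ** k * r for k, r in enumerate(rewards[:horizon]))


class LinearStateAdaptor:
    """Toy 1-D state adaptor: learns a shift/scale that aligns a
    simulated state with real plant observations via gradient steps
    on the squared alignment error."""

    def __init__(self, lr=0.1):
        self.a, self.b = 1.0, 0.0  # start as the identity map
        self.lr = lr

    def __call__(self, sim_state):
        return self.a * sim_state + self.b

    def update(self, sim_state, real_state):
        # Gradient step on (a * s_sim + b - s_real)^2
        err = self(sim_state) - real_state
        self.a -= self.lr * err * sim_state
        self.b -= self.lr * err
```

After repeated updates on (simulated, real) state pairs, the adaptor's output approaches the real measurement, so a policy trained in simulation receives inputs closer to what the plant would produce; the paper's SA plays this role with a learned network instead of the toy linear map.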

Suggested Citation

  • Liang, Huiping & Xie, Junyao & Huang, Biao & Li, Yonggang & Sun, Bei & Yang, Chunhua, 2025. "A novel sim2real reinforcement learning algorithm for process control," Reliability Engineering and System Safety, Elsevier, vol. 254(PB).
  • Handle: RePEc:eee:reensy:v:254:y:2025:i:pb:s0951832024007105
    DOI: 10.1016/j.ress.2024.110639

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0951832024007105
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ress.2024.110639?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Anwar, Ghazanfar Ali & Zhang, Xiaoge, 2024. "Deep reinforcement learning for intelligent risk optimization of buildings under hazard," Reliability Engineering and System Safety, Elsevier, vol. 247(C).
    2. Liu, Lujie & Yang, Jun & Yan, Bingxin, 2024. "A dynamic mission abort policy for transportation systems with stochastic dependence by deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    3. Rokhforoz, Pegah & Montazeri, Mina & Fink, Olga, 2023. "Safe multi-agent deep reinforcement learning for joint bidding and maintenance scheduling of generation units," Reliability Engineering and System Safety, Elsevier, vol. 232(C).
    4. Salazar, Jean C. & Weber, Philippe & Nejjari, Fatiha & Sarrate, Ramon & Theilliol, Didier, 2017. "System reliability aware Model Predictive Control framework," Reliability Engineering and System Safety, Elsevier, vol. 167(C), pages 663-672.
    5. Mohammadi, Reza & He, Qing, 2022. "A deep reinforcement learning approach for rail renewal and maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 225(C).
    6. Zhang, Xi & Wang, Qin & Bi, Xiaowen & Li, Donghong & Liu, Dong & Yu, Yuanjin & Tse, Chi Kong, 2024. "Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions," Reliability Engineering and System Safety, Elsevier, vol. 250(C).
    7. Zhang, Guangming & Zhang, Chao & Wang, Wei & Cao, Huan & Chen, Zhenyu & Niu, Yuguang, 2023. "Offline reinforcement learning control for electricity and heat coordination in a supercritical CHP unit," Energy, Elsevier, vol. 266(C).
    8. Ferreira Neto, Waldomiro Alves & Virgínio Cavalcante, Cristiano Alexandre & Do, Phuc, 2024. "Deep reinforcement learning for maintenance optimization of a scrap-based steel production line," Reliability Engineering and System Safety, Elsevier, vol. 249(C).
    9. Lin, Runze & Luo, Yangyang & Wu, Xialai & Chen, Junghui & Huang, Biao & Su, Hongye & Xie, Lei, 2024. "Surrogate empowered Sim2Real transfer of deep reinforcement learning for ORC superheat control," Applied Energy, Elsevier, vol. 356(C).
    10. Liao, Ruoyu & He, Yihai & Zhang, Jishan & Zheng, Xin & Zhang, Anqi & Zhang, Weifang, 2023. "Reliability proactive control approach based on product key reliability characteristics in manufacturing process," Reliability Engineering and System Safety, Elsevier, vol. 237(C).
    11. Blad, Christian & Bøgh, Simon & Kallesøe, Carsten Skovmose, 2022. "Data-driven Offline Reinforcement Learning for HVAC-systems," Energy, Elsevier, vol. 261(PB).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yang, Sen & Zhang, Yi & Lu, Xinzheng & Guo, Wei & Miao, Huiquan, 2024. "Multi-agent deep reinforcement learning based decision support model for resilient community post-hazard recovery," Reliability Engineering and System Safety, Elsevier, vol. 242(C).
    2. Wang, Pengfei & Liang, Wenlong & Gong, Huijun & Chen, Jie, 2024. "Decoupling control of core power and axial power distribution for large pressurized water reactors based on reinforcement learning," Energy, Elsevier, vol. 313(C).
    3. Tseremoglou, Iordanis & Santos, Bruno F., 2024. "Condition-Based Maintenance scheduling of an aircraft fleet under partial observability: A Deep Reinforcement Learning approach," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    4. Elena Karnoukhova & Anastasia Stepanova & Maria Kokoreva, 2018. "The Influence Of The Ownership Structure On The Performance Of Innovative Companies In The Us," HSE Working papers WP BRP 70/FE/2018, National Research University Higher School of Economics.
    5. Saleh, Ali & Remenyte-Prescott, Rasa & Prescott, Darren & Chiachío, Manuel, 2024. "Intelligent and adaptive asset management model for railway sections using the iPN method," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    6. Morato, P.G. & Andriotis, C.P. & Papakonstantinou, K.G. & Rigo, P., 2023. "Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    7. Hou, Guolian & Huang, Ting & Zheng, Fumeng & Huang, Congzhi, 2024. "A hierarchical reinforcement learning GPC for flexible operation of ultra-supercritical unit considering economy," Energy, Elsevier, vol. 289(C).
    8. Lee, Juseong & Mitici, Mihaela, 2023. "Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    9. Amin, Md. Tanjin & Khan, Faisal & Imtiaz, Syed, 2018. "Dynamic availability assessment of safety critical systems using a dynamic Bayesian network," Reliability Engineering and System Safety, Elsevier, vol. 178(C), pages 108-117.
    10. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    11. Saleh, Ali & Chiachío, Manuel & Salas, Juan Fernández & Kolios, Athanasios, 2023. "Self-adaptive optimized maintenance of offshore wind turbines by intelligent Petri nets," Reliability Engineering and System Safety, Elsevier, vol. 231(C).
    12. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    13. Yin, Xiuxing & Zhao, Xiaowei & Lin, Jin & Karcanias, Aris, 2020. "Reliability aware multi-objective predictive control for wind farm based on machine learning and heuristic optimizations," Energy, Elsevier, vol. 202(C).
    14. Levitin, Gregory & Xing, Liudong & Dai, Yuanshun, 2024. "Optimal attempt scheduling and aborting in heterogenous system performing asynchronous multi-attempt mission," Reliability Engineering and System Safety, Elsevier, vol. 251(C).
    15. Homod, Raad Z. & Mohammed, Hayder Ibrahim & Abderrahmane, Aissa & Alawi, Omer A. & Khalaf, Osamah Ibrahim & Mahdi, Jasim M. & Guedri, Kamel & Dhaidan, Nabeel S. & Albahri, A.S. & Sadeq, Abdellatif M. , 2023. "Deep clustering of Lagrangian trajectory for multi-task learning to energy saving in intelligent buildings using cooperative multi-agent," Applied Energy, Elsevier, vol. 351(C).
    16. Chemweno, Peter & Pintelon, Liliane & Muchiri, Peter Nganga & Van Horenbeek, Adriaan, 2018. "Risk assessment methodologies in maintenance decision making: A review of dependability modelling approaches," Reliability Engineering and System Safety, Elsevier, vol. 173(C), pages 64-77.
    17. Asadzadeh, Seyed Mohammad & Andersen, Nils Axel, 2024. "Optimal operational planning of a bio-fuelled cogeneration plant: Integration of sparse nonlinear dynamics identification and deep reinforcement learning," Applied Energy, Elsevier, vol. 376(PA).
    18. Lee, Jun S. & Yeo, In-Ho & Bae, Younghoon, 2024. "A stochastic track maintenance scheduling model based on deep reinforcement learning approaches," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    19. Dominik Latoń & Jakub Grela & Andrzej Ożadowicz, 2024. "Applications of Deep Reinforcement Learning for Home Energy Management Systems: A Review," Energies, MDPI, vol. 17(24), pages 1-30, December.
    20. Rokhforoz, Pegah & Montazeri, Mina & Fink, Olga, 2023. "Safe multi-agent deep reinforcement learning for joint bidding and maintenance scheduling of generation units," Reliability Engineering and System Safety, Elsevier, vol. 232(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:reensy:v:254:y:2025:i:pb:s0951832024007105. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/reliability-engineering-and-system-safety .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.