
A stochastic track maintenance scheduling model based on deep reinforcement learning approaches

Author

Listed:
  • Lee, Jun S.
  • Yeo, In-Ho
  • Bae, Younghoon

Abstract

A data-driven railway track maintenance scheduling framework based on a stochastic track deterioration model and deep reinforcement learning is proposed. Various track conditions, such as track geometry and the support capacity of the infrastructure, are considered in estimating the track deterioration rate, and the resulting track quality index is used to predict the state of each track segment. Further, additional field-specific constraints, including the number of tampings and the latest maintenance time of ballasted track, are incorporated to reflect field conditions as accurately as possible. From these conditions, the optimal maintenance action for each track segment is determined under the combined constraints of cost and ride comfort. In the present study, two reinforcement learning (RL) models, the Dueling Deep Q-Network (DuDQN) and the Asynchronous Advantage Actor-Critic (A3C), were employed to establish a decision support system for track maintenance, and their respective advantages and disadvantages were compared. The models were applied to field maintenance data, and the DuDQN model was found to be more suitable in our case. The optimal number of tampings before renewal was determined from the maintenance costs and field conditions, and the cost effect of ride comfort was investigated using the proposed deep RL model. Finally, possible improvements to the models are briefly outlined.
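As a rough, hypothetical illustration of the dueling Q-network architecture named in the abstract (not the authors' implementation), the Python/PyTorch sketch below shows how a DuDQN-style agent could score maintenance actions for a single track segment. The state features, the three-action set, and the network sizes are assumptions for illustration only.

# Minimal sketch (PyTorch) of a dueling Q-network for per-segment maintenance
# decisions. State features, action set, and layer sizes are illustrative
# assumptions, not the paper's formulation.
import torch
import torch.nn as nn

ACTIONS = ["no_action", "tamping", "renewal"]  # hypothetical action space

class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int = 4, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(64, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

# Example: greedy maintenance action for one segment whose (assumed) state is
# (track quality index, tampings so far, years since last tamping,
#  support-capacity indicator).
net = DuelingQNetwork()
segment_state = torch.tensor([[2.1, 3.0, 1.5, 0.8]])
with torch.no_grad():
    q_values = net(segment_state)
print(ACTIONS[int(q_values.argmax(dim=-1))])

In training, such a network would be fitted against a reward that penalises maintenance cost and loss of ride comfort, which is the role the abstract assigns to the DuDQN and A3C models.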

Suggested Citation

  • Lee, Jun S. & Yeo, In-Ho & Bae, Younghoon, 2024. "A stochastic track maintenance scheduling model based on deep reinforcement learning approaches," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
  • Handle: RePEc:eee:reensy:v:241:y:2024:i:c:s0951832023006233
    DOI: 10.1016/j.ress.2023.109709

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0951832023006233
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ress.2023.109709?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Andriotis, C.P. & Papakonstantinou, K.G., 2019. "Managing engineering systems with large state and action spaces through deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 191(C).
    2. Mohammadi, Reza & He, Qing, 2022. "A deep reinforcement learning approach for rail renewal and maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 225(C).
    3. Nguyen, Van-Thai & Do, Phuc & Vosin, Alexandre & Iung, Benoit, 2022. "Artificial-intelligence-based maintenance decision-making and optimization for multi-state component systems," Reliability Engineering and System Safety, Elsevier, vol. 228(C).
    4. Mohammadi, Reza & He, Qing & Karwan, Mark, 2021. "Data-driven robust strategies for joint optimization of rail renewal and maintenance planning," Omega, Elsevier, vol. 103(C).
    5. Sedghi, Mahdieh & Kauppila, Osmo & Bergquist, Bjarne & Vanhatalo, Erik & Kulahci, Murat, 2021. "A taxonomy of railway track maintenance planning and scheduling: A review and research trends," Reliability Engineering and System Safety, Elsevier, vol. 215(C).
    6. Bressi, Sara & Santos, João & Losa, Massimo, 2021. "Optimization of maintenance strategies for railway track-bed considering probabilistic degradation models and different reliability levels," Reliability Engineering and System Safety, Elsevier, vol. 207(C).
    7. Liu, Yu & Chen, Yiming & Jiang, Tao, 2020. "Dynamic selective maintenance optimization for multi-state systems over a finite horizon: A deep reinforcement learning approach," European Journal of Operational Research, Elsevier, vol. 283(1), pages 166-181.
    8. Morato, P.G. & Andriotis, C.P. & Papakonstantinou, K.G. & Rigo, P., 2023. "Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    9. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    10. Pinciroli, Luca & Baraldi, Piero & Zio, Enrico, 2023. "Maintenance optimization in industry 4.0," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lee, Dongkyu & Song, Junho, 2023. "Risk-informed operation and maintenance of complex lifeline systems using parallelized multi-agent deep Q-network," Reliability Engineering and System Safety, Elsevier, vol. 239(C).
    2. Najafi, Seyedvahid & Lee, Chi-Guhn, 2023. "A deep reinforcement learning approach for repair-based maintenance of multi-unit systems using proportional hazards model," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    3. Mohammadi, Reza & He, Qing, 2022. "A deep reinforcement learning approach for rail renewal and maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 225(C).
    4. Pinciroli, Luca & Baraldi, Piero & Zio, Enrico, 2023. "Maintenance optimization in industry 4.0," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    5. Morato, P.G. & Andriotis, C.P. & Papakonstantinou, K.G. & Rigo, P., 2023. "Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    6. Zhou, Yifan & Li, Bangcheng & Lin, Tian Ran, 2022. "Maintenance optimisation of multicomponent systems using hierarchical coordinated reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 217(C).
    7. Liu, Lujie & Yang, Jun, 2023. "A dynamic mission abort policy for the swarm executing missions and its solution method by tailored deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    8. Liu, Yu & Chen, Yiming & Jiang, Tao, 2020. "Dynamic selective maintenance optimization for multi-state systems over a finite horizon: A deep reinforcement learning approach," European Journal of Operational Research, Elsevier, vol. 283(1), pages 166-181.
    9. Yang, Hongbing & Li, Wenchao & Wang, Bin, 2021. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    10. Xu, Zhaoyi & Saleh, Joseph Homer, 2021. "Machine learning for reliability engineering and safety applications: Review of current status and future opportunities," Reliability Engineering and System Safety, Elsevier, vol. 211(C).
    11. da Costa, Paulo & Verleijsdonk, Peter & Voorberg, Simon & Akcay, Alp & Kapodistria, Stella & van Jaarsveld, Willem & Zhang, Yingqian, 2023. "Policies for the dynamic traveling maintainer problem with alerts," European Journal of Operational Research, Elsevier, vol. 305(3), pages 1141-1152.
    12. Tseremoglou, Iordanis & Santos, Bruno F., 2024. "Condition-Based Maintenance scheduling of an aircraft fleet under partial observability: A Deep Reinforcement Learning approach," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    13. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    14. Guan, Xiaoshu & Xiang, Zhengliang & Bao, Yuequan & Li, Hui, 2022. "Structural dominant failure modes searching method based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    15. Xie, Haipeng & Tang, Lingfeng & Zhu, Hao & Cheng, Xiaofeng & Bie, Zhaohong, 2023. "Robustness assessment and enhancement of deep reinforcement learning-enabled load restoration for distribution systems," Reliability Engineering and System Safety, Elsevier, vol. 237(C).
    16. Saleh, Ali & Remenyte-Prescott, Rasa & Prescott, Darren & Chiachío, Manuel, 2024. "Intelligent and adaptive asset management model for railway sections using the iPN method," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    17. Yan, Dongyang & Li, Keping & Zhu, Qiaozhen & Liu, Yanyan, 2023. "A railway accident prevention method based on reinforcement learning – Active preventive strategy by multi-modal data," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    18. Lee, Juseong & Mitici, Mihaela, 2023. "Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    19. Ye, Zhenggeng & Cai, Zhiqiang & Yang, Hui & Si, Shubin & Zhou, Fuli, 2023. "Joint optimization of maintenance and quality inspection for manufacturing networks based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 236(C).
    20. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:reensy:v:241:y:2024:i:c:s0951832023006233. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/reliability-engineering-and-system-safety.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.