
Optimal preventive maintenance policy based on reinforcement learning of a fleet of military trucks

Author

Listed:
  • Stephane R. A. Barde

    (Korea Advanced Institute of Science and Technology (KAIST))

  • Soumaya Yacout

    (Ecole Polytechnique de Montreal)

  • Hayong Shin

    (Korea Advanced Institute of Science and Technology (KAIST))

Abstract

In this paper, we model preventive maintenance strategies for equipment composed of multiple non-identical components, each with a different time-to-failure probability distribution, by using a Markov decision process (MDP). The originality of this paper resides in the fact that a Monte Carlo reinforcement learning (MCRL) approach is used to find the optimal policy for each strategy. The approach is applied to a previously published application that deals with a fleet of military trucks. The fleet consists of a group of similar trucks that are composed of non-identical components. The problem is formulated as an MDP and solved by an MCRL technique. The advantage of this modeling technique over the published one is that there is no need to estimate the main parameters of the model, such as the transition probabilities. These parameters are treated as variables and are found by the modeling technique while it searches for the optimal solution. Moreover, the technique is not bounded by any explicit mathematical formula, and it converges to the optimal solution, whereas the previous model optimizes the replacement policy of each component separately, which leads only to a local optimum. The results show that by using the reinforcement learning approach, we are able to obtain a 36.44% better solution, that is, less downtime.
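
To make the approach concrete, below is a minimal sketch in Python of the general idea: a small preventive-maintenance MDP over non-identical components, solved by every-visit Monte Carlo control with an epsilon-greedy policy, so that transition probabilities are never estimated explicitly but only sampled through simulation. The two-component Weibull failure model, the cost values, and all hyperparameters are illustrative assumptions, not the model or the data from the paper.

import math
import random
from collections import defaultdict

random.seed(0)

# Illustrative parameters (assumptions, not the paper's data):
SHAPE_SCALE = [(1.5, 10.0), (0.9, 6.0)]  # Weibull (shape, scale) per component
N = len(SHAPE_SCALE)
ACTIONS = [tuple((a >> i) & 1 for i in range(N)) for a in range(2 ** N)]
PM_COST, FAIL_COST = 1.0, 5.0            # preventive vs. corrective downtime
MAX_AGE, HORIZON, EPISODES = 15, 50, 20000
EPS, GAMMA = 0.1, 0.95

def fail_prob(age, k, lam):
    # Conditional probability that a component of the given age fails
    # during the next period, from the Weibull survival function.
    return 1.0 - math.exp((age / lam) ** k - ((age + 1) / lam) ** k)

def step(state, action):
    # Replace preventively, then sample failures. Transition probabilities
    # are never written down, only sampled -- the key point of MCRL.
    cost, ages = 0.0, list(state)
    for i, replace in enumerate(action):
        if replace:
            cost += PM_COST
            ages[i] = 0
    for i, (k, lam) in enumerate(SHAPE_SCALE):
        if random.random() < fail_prob(ages[i], k, lam):
            cost += FAIL_COST            # unplanned failure: more downtime
            ages[i] = 0
        else:
            ages[i] = min(ages[i] + 1, MAX_AGE)
    return tuple(ages), -cost            # reward = negative downtime cost

Q = defaultdict(float)
visits = defaultdict(int)

def policy(state):
    if random.random() < EPS:            # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    state, episode = (0,) * N, []
    for _ in range(HORIZON):
        action = policy(state)
        nxt, reward = step(state, action)
        episode.append((state, action, reward))
        state = nxt
    g = 0.0                              # every-visit Monte Carlo backup
    for s, a, r in reversed(episode):
        g = r + GAMMA * g
        visits[(s, a)] += 1
        Q[(s, a)] += (g - Q[(s, a)]) / visits[(s, a)]

# Greedy policy at a few ages: a 1 in position i means "replace component i".
for s in [(0, 0), (5, 3), (10, 8)]:
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))

In the paper the objective is fleet downtime and the state and action spaces are those of the truck fleet; the sketch only demonstrates the mechanics of learning a maintenance policy from simulated episodes, with the model parameters left implicit in the simulator.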

Suggested Citation

  • Stephane R. A. Barde & Soumaya Yacout & Hayong Shin, 2019. "Optimal preventive maintenance policy based on reinforcement learning of a fleet of military trucks," Journal of Intelligent Manufacturing, Springer, vol. 30(1), pages 147-161, January.
  • Handle: RePEc:spr:joinma:v:30:y:2019:i:1:d:10.1007_s10845-016-1237-7
    DOI: 10.1007/s10845-016-1237-7

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10845-016-1237-7
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10845-016-1237-7?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Gosavi, Abhijit, 2004. "Reinforcement learning for long-run average cost," European Journal of Operational Research, Elsevier, vol. 155(3), pages 654-674, June.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Barlow, E. & Bedford, T. & Revie, M. & Tan, J. & Walls, L., 2021. "A performance-centred approach to optimising maintenance of complex systems," European Journal of Operational Research, Elsevier, vol. 292(2), pages 579-595.
    2. Wu, Tianyi & Yang, Li & Ma, Xiaobing & Zhang, Zihan & Zhao, Yu, 2020. "Dynamic maintenance strategy with iteratively updated group information," Reliability Engineering and System Safety, Elsevier, vol. 197(C).
    3. A. Khatab & C. Diallo & E.-H. Aghezzaf & U. Venkatadri, 2022. "Optimization of the integrated fleet-level imperfect selective maintenance and repairpersons assignment problem," Journal of Intelligent Manufacturing, Springer, vol. 33(3), pages 703-718, March.
4. Michele Compare & Luca Bellani & Enrico Cobelli & Enrico Zio & Francesco Annunziata & Fausto Carlevaro & Marzia Sepe, 2020. "A reinforcement learning approach to optimal part flow management for gas turbine maintenance," Journal of Risk and Reliability, vol. 234(1), pages 52-62, February.
    5. Jorge Ribeiro & Pedro Andrade & Manuel Carvalho & Catarina Silva & Bernardete Ribeiro & Licínio Roque, 2022. "Playful Probes for Design Interaction with Machine Learning: A Tool for Aircraft Condition-Based Maintenance Planning and Visualisation," Mathematics, MDPI, vol. 10(9), pages 1-20, May.
    6. Ashish Kumar & Roussos Dimitrakopoulos & Marco Maulen, 2020. "Adaptive self-learning mechanisms for updating short-term production decisions in an industrial mining complex," Journal of Intelligent Manufacturing, Springer, vol. 31(7), pages 1795-1811, October.
    7. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Duraikannan Sundaramoorthi & Victoria Chen & Jay Rosenberger & Seoung Kim & Deborah Buckley-Behan, 2010. "A data-integrated simulation-based optimization for assigning nurses to patient admissions," Health Care Management Science, Springer, vol. 13(3), pages 210-221, September.
    2. Yang, Hongbing & Li, Wenchao & Wang, Bin, 2021. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    3. Li, Xueping & Wang, Jiao & Sawhney, Rapinder, 2012. "Reinforcement learning for joint pricing, lead-time and scheduling decisions in make-to-order systems," European Journal of Operational Research, Elsevier, vol. 221(1), pages 99-109.
    4. Singh, Sumeetpal S. & Tadic, Vladislav B. & Doucet, Arnaud, 2007. "A policy gradient method for semi-Markov decision processes with application to call admission control," European Journal of Operational Research, Elsevier, vol. 178(3), pages 808-818, May.
    5. Peter Seele & Claus Dierksmeier & Reto Hofstetter & Mario D. Schultz, 2021. "Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing," Journal of Business Ethics, Springer, vol. 170(4), pages 697-719, May.
    6. Barlow, E. & Bedford, T. & Revie, M. & Tan, J. & Walls, L., 2021. "A performance-centred approach to optimising maintenance of complex systems," European Journal of Operational Research, Elsevier, vol. 292(2), pages 579-595.
    7. Schütz, Hans-Jörg & Kolisch, Rainer, 2012. "Approximate dynamic programming for capacity allocation in the service industry," European Journal of Operational Research, Elsevier, vol. 218(1), pages 239-250.
    8. Safaei, Fatemeh & Ahmadi, Jafar & Taghipour, Sharareh, 2022. "A maintenance policy for a k-out-of-n system under enhancing the system’s operating time and safety constraints, and selling the second-hand components," Reliability Engineering and System Safety, Elsevier, vol. 218(PA).
    9. Jian Wang & Murtaza Das & Stephen Tappert, 2021. "Applying reinforcement learning to estimating apartment reference rents," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 20(3), pages 330-343, June.
    10. van Wezel, M.C. & van Eck, N.J.P., 2005. "Reinforcement learning and its application to Othello," Econometric Institute Research Papers EI 2005-47, Erasmus University Rotterdam, Erasmus School of Economics (ESE), Econometric Institute.
    11. Xiaonong Lu & Baoqun Yin & Haipeng Zhang, 2016. "A reinforcement-learning approach for admission control in distributed network service systems," Journal of Combinatorial Optimization, Springer, vol. 31(3), pages 1241-1268, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:joinma:v:30:y:2019:i:1:d:10.1007_s10845-016-1237-7. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.