
Multi-Echelon Inventory Optimization Using Deep Reinforcement Learning

In: Quantitative Models in Life Science Business

Authors

  • Patric Hammler (Universität Bern)
  • Nicolas Riesterer (F. Hoffmann-La Roche AG)
  • Gang Mu (University of Zurich)
  • Torsten Braun (Universität Bern)

Abstract

In this chapter, we provide an overview of inventory management in the pharmaceutical industry and of how to model and optimize it. Inventory management is a highly relevant topic because it incurs substantial costs, including holding, shortage, and reordering costs. A stock-out in particular can cause damage that goes beyond the monetary loss of missed sales. Minimizing these costs is the task of an optimized reorder policy, and a reorder policy is optimal when it minimizes the accumulated cost in every situation. However, finding an optimal policy is not trivial. First, the problem is highly stochastic, since both demands and lead times are variable. Second, the supply chain consists of several warehouses, including the factory, global distribution warehouses, and local affiliate warehouses, and the reorder policy of each warehouse affects the optimal reorder policies of the related warehouses. In this context, we discuss the concept of multi-echelon inventory optimization and a formalism that captures both the stochastic behavior of the environment and how it is influenced by the reorder policy: Markov decision processes (MDPs). On this basis, we introduce Reinforcement Learning (RL), a methodology capable of finding (near-)optimal reorder policies for MDPs, and discuss its benefits and weaknesses. Finally, we present simulation-based results and current research directions.
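The MDP-plus-RL setup described in the abstract can be illustrated with a small sketch. The code below is not the authors' model: it assumes a single-echelon warehouse with Poisson demand, collapses the lead time to one period, and trains a tabular Q-learning agent to pick reorder quantities that minimize accumulated holding, shortage, and reordering costs. All parameter values (demand rate, cost coefficients, action grid) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-echelon inventory MDP (all parameters assumed).
MAX_INV = 20                         # storage capacity
ACTIONS = [0, 5, 10]                 # candidate reorder quantities
HOLD, SHORT, ORDER = 1.0, 10.0, 5.0  # per-unit holding, per-unit shortage, fixed reorder cost

def step(inv, order_qty):
    """One period: the order arrives (lead time collapsed to one period),
    stochastic demand is served, and the period cost is computed."""
    inv = min(inv + order_qty, MAX_INV)
    demand = rng.poisson(4)
    shortage = max(demand - inv, 0)
    inv = max(inv - demand, 0)
    cost = HOLD * inv + SHORT * shortage + (ORDER if order_qty > 0 else 0.0)
    return inv, cost

# Tabular Q-learning; the state is the on-hand inventory level 0..MAX_INV.
# Because we accumulate costs rather than rewards, updates use the minimum
# over next-state action values, and the greedy policy is an argmin.
Q = np.zeros((MAX_INV + 1, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
inv = 10
for t in range(200_000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmin(Q[inv]))
    nxt, cost = step(inv, ACTIONS[a])
    Q[inv, a] += alpha * (cost + gamma * Q[nxt].min() - Q[inv, a])
    inv = nxt

policy = [ACTIONS[int(np.argmin(Q[s]))] for s in range(MAX_INV + 1)]
print("reorder quantity per inventory level:", policy)

In the multi-echelon setting studied in the chapter, the state would additionally include in-transit orders and the inventory positions of related warehouses; that enlarged state space is what makes deep RL (function approximation instead of a table) attractive.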

Suggested Citation

  • Patric Hammler & Nicolas Riesterer & Gang Mu & Torsten Braun, 2023. "Multi-Echelon Inventory Optimization Using Deep Reinforcement Learning," SpringerBriefs in Economics, in: Jung Kyu Canci & Philipp Mekler & Gang Mu (ed.), Quantitative Models in Life Science Business, pages 73-93, Springer.
  • Handle: RePEc:spr:spbchp:978-3-031-11814-2_5
    DOI: 10.1007/978-3-031-11814-2_5
