
A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization

Author

Listed:
  • Afshin Oroojlooyjadid

    (Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015)

  • MohammadReza Nazari

    (Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015)

  • Lawrence V. Snyder

    (Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015)

  • Martin Takáč

    (Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015)

Abstract

Problem definition: The beer game is widely used in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multiagent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network’s total cost, although each agent only observes local information.

Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally.

Methodology: We propose a deep reinforcement learning (RL) algorithm to play the beer game. Our algorithm makes no assumptions about costs or other settings. As with any deep RL algorithm, training is computationally intensive, but once trained, the algorithm executes in real time. We propose a transfer-learning approach so that training performed for one agent can be adapted quickly for other agents and settings.

Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More important, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. We observe similar results using a real-world data set. Sensitivity analysis shows that a trained model is robust to changes in the cost coefficients. Finally, applying transfer learning reduces the training time by one order of magnitude.

Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways. Our RL agent has been integrated into a new online beer game, which has been played more than 17,000 times by more than 4,000 people.
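The base-stock policy referenced in the abstract has a simple concrete form: each period, the agent orders just enough to raise its inventory position (on-hand stock minus backorders plus pipeline orders) back to a fixed target level. The sketch below simulates one such agent in isolation; the demand distribution, one-period lead time, cost parameters, and all function names are illustrative assumptions, not the paper's experimental settings.

```python
import random


def base_stock_order(base_stock_level, inventory_position):
    """Order up to the base-stock level; never order a negative amount."""
    return max(0, base_stock_level - inventory_position)


def simulate_agent(base_stock_level=12, horizon=20,
                   holding_cost=1.0, shortage_cost=2.0, seed=0):
    """Simulate one agent under a base-stock policy (illustrative model)."""
    rng = random.Random(seed)
    on_hand = base_stock_level  # starting inventory (assumption)
    in_transit = 0              # last period's order; lead time is one period
    total_cost = 0.0
    for _ in range(horizon):
        on_hand += in_transit                 # receive last period's order
        demand = rng.randint(0, 8)            # illustrative demand draw
        on_hand -= demand                     # negative on_hand = backorders
        total_cost += (holding_cost * max(on_hand, 0)
                       + shortage_cost * max(-on_hand, 0))
        # Inventory position equals on_hand here: the pipeline is empty
        # because the only outstanding order just arrived.
        in_transit = base_stock_order(base_stock_level, on_hand)
    return total_cost


if __name__ == "__main__":
    print(f"Total cost over 20 periods: {simulate_agent():.1f}")
```

Roughly speaking, the paper's DQN agent replaces the fixed `base_stock_order` rule with a learned mapping from each agent's local observations to an order quantity, which is what allows it to cope with teammates who do not order rationally.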

Suggested Citation

  • Afshin Oroojlooyjadid & MohammadReza Nazari & Lawrence V. Snyder & Martin Takáč, 2022. "A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization," Manufacturing & Service Operations Management, INFORMS, vol. 24(1), pages 285-304, January.
  • Handle: RePEc:inm:ormsom:v:24:y:2022:i:1:p:285-304
    DOI: 10.1287/msom.2020.0939

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/msom.2020.0939
    Download Restriction: no

    File URL: https://libkey.io/10.1287/msom.2020.0939?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version of this item that your library subscription provides access to

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormsom:v:24:y:2022:i:1:p:285-304. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.