Author
Listed:
- Kaynov, Illya
- van Knippenberg, Marijn
- Menkovski, Vlado
- van Breemen, Albert
- van Jaarsveld, Willem
Abstract
The One-Warehouse Multi-Retailer (OWMR) system is the prototypical distribution and inventory system. Many OWMR variants exist; e.g., demand in excess of supply may be completely back-ordered, partially back-ordered, or lost. Prior research has focused on heuristic reordering policies, such as echelon base-stock levels, coupled with heuristic allocation policies. Constructing well-performing policies is time-consuming and must be redone for every problem variant. By contrast, Deep Reinforcement Learning (DRL) is a general-purpose technique for sequential decision making that has yielded good results for various challenging inventory systems. However, applying DRL to OWMR problems is nontrivial, since allocation involves setting a quantity for each retailer: the number of possible allocations grows exponentially in the number of retailers. Since each action is typically associated with a neural network output node, this renders standard DRL techniques intractable. Our proposed DRL algorithm instead infers a multi-discrete action distribution, whose number of output nodes grows linearly in the number of retailers. Moreover, when total retailer orders exceed the available warehouse inventory, we propose a random rationing policy that substantially improves the ability of standard DRL algorithms to train good policies, because it promotes the learning of feasible retailer order quantities. The resulting algorithm outperforms general-purpose benchmark policies by ∼1–3% for the lost-sales case and by ∼12–20% for the partial back-ordering case. For complete back-ordering, the algorithm cannot consistently outperform the benchmark.
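The two ideas in the abstract — per-retailer categorical heads whose output count grows linearly rather than exponentially, and random rationing of scarce warehouse stock — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the flat logit lists standing in for network outputs, and the one-unit-at-a-time rationing rule are illustrative assumptions.

```python
import math
import random

def sample_multi_discrete(logits, rng):
    """Sample one order quantity per retailer from independent
    categorical heads. With R retailers and orders 0..Q, this
    needs R*(Q+1) output nodes instead of (Q+1)**R joint actions."""
    orders = []
    for head in logits:  # one list of logits per retailer
        m = max(head)
        weights = [math.exp(x - m) for x in head]  # stable softmax weights
        orders.append(rng.choices(range(len(head)), weights=weights)[0])
    return orders

def random_rationing(orders, warehouse_stock, rng):
    """If total retailer orders exceed warehouse stock, hand out the
    available units one at a time to randomly chosen retailers whose
    orders are not yet fully filled (illustrative rationing rule)."""
    if sum(orders) <= warehouse_stock:
        return list(orders)  # all orders are feasible as-is
    allocation = [0] * len(orders)
    remaining = list(orders)
    for _ in range(warehouse_stock):
        candidates = [i for i, r in enumerate(remaining) if r > 0]
        i = rng.choice(candidates)
        allocation[i] += 1
        remaining[i] -= 1
    return allocation

rng = random.Random(42)
# 5 retailers, order quantities 0..10; random logits stand in for a policy net.
logits = [[rng.gauss(0.0, 1.0) for _ in range(11)] for _ in range(5)]
orders = sample_multi_discrete(logits, rng)
filled = random_rationing(orders, warehouse_stock=8, rng=rng)
```

Because every allocation it produces is feasible regardless of what the policy orders, this kind of rationing lets a standard DRL algorithm explore order quantities without ever violating the warehouse inventory constraint.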
Suggested Citation
Kaynov, Illya & van Knippenberg, Marijn & Menkovski, Vlado & van Breemen, Albert & van Jaarsveld, Willem, 2024.
"Deep Reinforcement Learning for One-Warehouse Multi-Retailer inventory management,"
International Journal of Production Economics, Elsevier, vol. 267(C).
Handle:
RePEc:eee:proeco:v:267:y:2024:i:c:s0925527323003201
DOI: 10.1016/j.ijpe.2023.109088
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:proeco:v:267:y:2024:i:c:s0925527323003201. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/ijpe.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.