Author
Listed:
- Fadaki, Masih
- Ansari, Sina
- Abareshi, Ahmad
- Lee, Paul Tae-Woo
Abstract
In humanitarian and nonprofit operations, distributing aid such as food, shelter, and medical supplies is challenging in an online setting, where future demand is unknown and allocation decisions must be made in real time as uncertainties unfold. Being overly conservative in allocating items at the beginning of the supply chain, in order to save stock for demand further downstream, increases the likelihood of unallocated items (waste). On the other hand, fully satisfying the demand of nodes at earlier stages of the supply chain may harm the equity of the allocation policy, as downstream nodes may receive significantly fewer items in proportion to their demand. This study proposes a framework for modeling the sequential decisions in this online resource allocation problem as a Markov Decision Process (MDP). Because the state–action space can become very large for this problem, standard dynamic programming methods from the reinforcement learning domain reach their limits, making Approximate Dynamic Programming (ADP) a practical alternative. Two methods of measuring downstream uncertainty are proposed, and Policy Function Approximation (PFA) is used to develop an optimal allocation policy. Numerical results and an application of the proposed model to the Food Bank of the Southern Tier in New York suggest a reasonable balance between maximizing efficiency (minimizing waste from unallocated items) and ensuring an equitable allocation.
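The efficiency–equity trade-off described in the abstract can be illustrated with a minimal sketch. This is not the authors' model: the single-parameter policy `pfa_allocate`, the blending parameter `theta`, and the use of the true remaining demand as a stand-in for a downstream forecast are all hypothetical simplifications, shown only to make the idea of a Policy Function Approximation concrete.

```python
# Hypothetical sketch: a one-parameter allocation policy (a toy PFA).
# theta = 1.0 fulfills each node myopically (low waste, poor equity);
# theta = 0.0 reserves stock proportionally for downstream nodes.

def pfa_allocate(stock, demand, expected_downstream, theta):
    """Allocation to the current node under a one-parameter policy."""
    total = demand + expected_downstream
    # Proportional "fair share" that reserves stock for downstream demand.
    fair_share = stock * demand / total if total > 0 else 0.0
    # Myopic choice: fulfill as much of the current demand as stock allows.
    myopic = min(demand, stock)
    alloc = theta * myopic + (1.0 - theta) * fair_share
    return min(demand, stock, alloc)

def simulate(initial_stock, demands, theta):
    """One episode over a node sequence; returns (waste, fill rates)."""
    stock = initial_stock
    fills = []
    for i, d in enumerate(demands):
        # In an online setting this would be a forecast, not the true value.
        expected_downstream = sum(demands[i + 1:])
        a = pfa_allocate(stock, d, expected_downstream, theta)
        fills.append(a / d if d > 0 else 1.0)
        stock -= a
    return stock, fills  # leftover stock at the end is waste

waste_myopic, fills_myopic = simulate(100, [60, 50, 40], theta=1.0)
waste_fair, fills_fair = simulate(100, [60, 50, 40], theta=0.0)
```

With the myopic policy the last node is starved (fill rate 0), while the proportional policy gives every node the same fill rate; tuning `theta` between these extremes is what a simulation-based PFA search would do.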
Suggested Citation
Fadaki, Masih & Ansari, Sina & Abareshi, Ahmad & Lee, Paul Tae-Woo, 2025.
"Sequential resource allocation for humanitarian operations using approximate dynamic programming,"
Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 201(C).
Handle:
RePEc:eee:transe:v:201:y:2025:i:c:s1366554525002546
DOI: 10.1016/j.tre.2025.104213
Download full text from publisher
As access to this document is restricted, you may want to look for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:transe:v:201:y:2025:i:c:s1366554525002546. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help by adding them using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/description#description.
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.