
Approximate dynamic programming for capacity allocation in the service industry

Author Info

  • Schütz, Hans-Jörg
  • Kolisch, Rainer

    Abstract

    We consider a problem in which different classes of customers can book different types of service in advance, and the service company must respond to each booking request immediately by confirming or rejecting it. The objective of the service company is to maximize profit, which is composed of class-type-specific revenues, refunds for cancellations or no-shows, and the cost of overtime. Calculating the overtime cost requires information on the underlying appointment schedule. In contrast to most models in the literature, we assume that the service times of clients are stochastic and that clients may be unpunctual. Throughout the paper we relate the problem to capacity allocation in radiology services. The problem is modeled as a continuous-time Markov decision process and solved using simulation-based approximate dynamic programming (ADP) combined with a discrete-event simulation of the service period. We adapt a heuristic ADP algorithm from the literature and investigate the benefits of applying ADP to this type of problem. First, we study a simplified problem with deterministic service times and punctual client arrivals and compare the solution from the ADP algorithm to the optimal solution. We find that the heuristic ADP algorithm performs very well in terms of objective function value, solution time, and memory requirements. Second, we study the problem with stochastic service times and unpunctuality. The resulting policy constitutes a large improvement over an “optimal” policy deduced under restrictive, simplifying assumptions.
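    The core idea of the abstract — learning a value function by forward simulation and using it to confirm or reject incoming booking requests — can be illustrated with a deliberately simplified sketch. The toy model below is not the paper's model: it is discrete-time with two hypothetical customer classes, a fixed slot capacity, and made-up revenues and arrival probabilities, and it ignores cancellations, no-shows, unpunctuality, and overtime. It learns a tabular value function V[t][c] (value-to-go with c free slots at period t) via temporal-difference updates, accepting a request only when its revenue plus the value of one less free slot beats rejecting:

```python
import random

def simulate_adp(capacity=10, horizon=50, episodes=2000,
                 revenues=(5.0, 1.0), arrival_probs=(0.3, 0.5),
                 alpha=0.1, seed=0):
    """Learn V[t][c] by forward simulation and TD(0) updates.

    All parameters are illustrative assumptions, not values from the paper.
    """
    rng = random.Random(seed)
    # V[t][c]: estimated value-to-go with c free slots at period t;
    # row `horizon` is the terminal period and stays at zero.
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]
    for _ in range(episodes):
        c = capacity
        for t in range(horizon):
            # Sample at most one booking request per period.
            u = rng.random()
            if u < arrival_probs[0]:
                cls = 0
            elif u < arrival_probs[0] + arrival_probs[1]:
                cls = 1
            else:
                cls = None
            reward = 0.0
            c_next = c
            if cls is not None and c > 0:
                # Accept iff revenue plus the value of one less free slot
                # is at least the value of keeping the slot (rejecting).
                if revenues[cls] + V[t + 1][c - 1] >= V[t + 1][c]:
                    reward = revenues[cls]
                    c_next = c - 1
            # TD(0) update toward the one-step sampled return.
            V[t][c] += alpha * (reward + V[t + 1][c_next] - V[t][c])
            c = c_next
    return V
```

    With the learned V, a class-k request arriving at period t with c free slots is accepted exactly when revenues[k] + V[t+1][c-1] >= V[t+1][c], i.e. when the revenue covers the opportunity cost of the slot. The paper's actual model is far richer — continuous time, stochastic service durations, unpunctual clients, refunds, and overtime cost evaluated through a discrete-event simulation — so this sketch only conveys the admission-control logic.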

    Download Info

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221711008101
    Download Restriction: Full text for ScienceDirect subscribers only

    Bibliographic Info

    Article provided by Elsevier in its journal European Journal of Operational Research.

    Volume (Year): 218 (2012)
    Issue: 1
    Pages: 239-250

    Handle: RePEc:eee:ejores:v:218:y:2012:i:1:p:239-250

    Contact details of provider:
    Web page: http://www.elsevier.com/locate/eor

    Related research

    Keywords: Capacity allocation; Services; Health care operations; Approximate dynamic programming; Reinforcement learning; Semi-Markov decision process;

    References

    1. Gosavi, Abhijit, 2004. "Reinforcement learning for long-run average cost," European Journal of Operational Research, Elsevier, vol. 155(3), pages 654-674, June.
    2. Sabine Sickinger & Rainer Kolisch, 2009. "The performance of a generalized Bailey–Welch rule for outpatient appointment scheduling under inpatient and emergency demand," Health Care Management Science, Springer, vol. 12(4), pages 408-419, December.
    3. Nan Liu & Serhan Ziya & Vidyadhar G. Kulkarni, 2010. "Dynamic Scheduling of Outpatient Appointments Under Patient No-Shows and Cancellations," Manufacturing & Service Operations Management, INFORMS, vol. 12(2), pages 347-364, September.
    4. Yigal Gerchak & Diwakar Gupta & Mordechai Henig, 1996. "Reservation Planning for Elective Surgery Under Uncertain Demand for Emergency Surgery," Management Science, INFORMS, vol. 42(3), pages 321-334, March.
    5. Tapas K. Das & Abhijit Gosavi & Sridhar Mahadevan & Nicholas Marchalleck, 1999. "Solving Semi-Markov Decision Problems Using Average Reward Reinforcement Learning," Management Science, INFORMS, vol. 45(4), pages 560-574, April.
    6. Vandaele, Nico & Van Nieuwenhuyse, Inneke & Cupers, Sascha, 2003. "Optimal grouping for a nuclear magnetic resonance scanner by means of an open queueing model," European Journal of Operational Research, Elsevier, vol. 151(1), pages 181-192, November.
    7. Singh, Sumeetpal S. & Tadic, Vladislav B. & Doucet, Arnaud, 2007. "A policy gradient method for semi-Markov decision processes with application to call admission control," European Journal of Operational Research, Elsevier, vol. 178(3), pages 808-818, May.

    Citations


    Cited by:
    1. Gartner, Daniel & Kolisch, Rainer, 2014. "Scheduling the hospital-wide flow of elective patients," European Journal of Operational Research, Elsevier, vol. 233(3), pages 689-699.
    2. Geng, Na & Xie, Xiaolan & Jiang, Zhibin, 2013. "Implementation strategies of a contract-based MRI examination reservation process for stroke patients," European Journal of Operational Research, Elsevier, vol. 231(2), pages 371-380.
    3. De Vuyst, Stijn & Bruneel, Herwig & Fiems, Dieter, 2014. "Computationally efficient evaluation of appointment schedules in health care," European Journal of Operational Research, Elsevier, vol. 237(3), pages 1142-1154.
    4. Sauré, Antoine & Patrick, Jonathan & Tyldesley, Scott & Puterman, Martin L., 2012. "Dynamic multi-appointment patient scheduling for radiation therapy," European Journal of Operational Research, Elsevier, vol. 223(2), pages 573-584.


