
Nonstationary Reinforcement Learning: The Blessing of (More) Optimism

Author

Listed:
  • Wang Chi Cheung

    (Department of Industrial Systems Engineering and Management, National University of Singapore, 117576 Singapore)

  • David Simchi-Levi

    (Institute for Data, Systems, and Society, Department of Civil and Environmental Engineering and Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139)

  • Ruihao Zhu

    (SC Johnson College of Business, Cornell University, Ithaca, New York 14853)

Abstract

Motivated by operations research applications, such as inventory control and real-time bidding, we consider undiscounted reinforcement learning in Markov decision processes under model uncertainty and temporal drifts. In this setting, both the latent reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the sliding window upper confidence bound for reinforcement learning with confidence widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the bandit-over-reinforcement learning algorithm to adaptively tune the SWUCRL2-CW algorithm, achieving the same dynamic regret bound in a parameter-free manner (i.e., without knowing the variation budgets). Finally, we conduct numerical experiments to show that our proposed algorithms achieve superior empirical performance compared with existing algorithms. Notably, under nonstationarity, historical data samples may falsely indicate that state transitions rarely happen. This presents a significant challenge when one tries to apply the conventional optimism-in-the-face-of-uncertainty principle to achieve a low dynamic regret bound. We overcome this challenge by proposing a novel confidence-widening technique that incorporates additional optimism into our learning algorithms. To extend our theoretical findings, we demonstrate, in the context of single-item inventory control with lost sales, fixed cost, and zero lead time, how one can leverage special structure in the state transition distributions to achieve an improved dynamic regret bound in time-varying demand environments.
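
The confidence-widening idea described above lends itself to a short illustration. The sketch below (Python; the class name, the L1-style radius, and the parameter names window, delta, and eta are hypothetical choices for exposition, not the authors' implementation) shows how a sliding-window empirical transition estimate can be paired with a confidence radius enlarged by an extra slack term, so that an optimistic planner searches over a region large enough to still contain a drifting transition kernel.

```python
import math
from collections import deque, Counter

class SlidingWindowTransitionEstimate:
    """Illustrative sketch: sliding-window empirical transition estimate
    for one (state, action) pair, with a confidence-widened L1 radius.
    Parameter names (window, delta, eta) are assumptions for exposition."""

    def __init__(self, num_states, window, delta, eta):
        self.num_states = num_states   # size of the state space S
        self.window = window           # sliding-window length W
        self.delta = delta             # confidence failure probability
        self.eta = eta                 # confidence-widening slack (extra optimism)
        self.samples = deque()         # most recent next-state observations

    def observe(self, next_state):
        # Retain only the last W observations for this (state, action) pair.
        self.samples.append(next_state)
        if len(self.samples) > self.window:
            self.samples.popleft()

    def empirical_distribution(self):
        # Empirical next-state distribution over the window
        # (all zeros if no samples have been seen yet).
        n = max(1, len(self.samples))
        counts = Counter(self.samples)
        return [counts.get(s, 0) / n for s in range(self.num_states)]

    def confidence_radius(self):
        # A standard L1-deviation-style radius for the windowed estimate,
        # plus the widening term eta: under temporal drift, eta keeps the
        # confidence region large enough to contain the moving kernel.
        n = max(1, len(self.samples))
        radius = math.sqrt(2 * self.num_states * math.log(2 / self.delta) / n)
        return radius + self.eta
```

An optimistic planner would then optimize over all transition kernels within this widened ball around the empirical distribution. Without the eta term, samples collected under an earlier kernel can shrink the region around a stale estimate, which is the failure mode the abstract describes; the extra slack is the "more optimism" referenced in the title.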

Suggested Citation

  • Wang Chi Cheung & David Simchi-Levi & Ruihao Zhu, 2023. "Nonstationary Reinforcement Learning: The Blessing of (More) Optimism," Management Science, INFORMS, vol. 69(10), pages 5722-5739, October.
  • Handle: RePEc:inm:ormnsc:v:69:y:2023:i:10:p:5722-5739
    DOI: 10.1287/mnsc.2023.4704

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.2023.4704
    Download Restriction: no

