
Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis

Author

Listed:
  • Gen Li

    (Department of Statistics and Data Science, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104)

  • Changxiao Cai

    (Department of Biostatistics, University of Pennsylvania, Philadelphia, Pennsylvania 19104)

  • Yuxin Chen

    (Department of Statistics and Data Science, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104)

  • Yuting Wei

    (Department of Statistics and Data Science, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104)

  • Yuejie Chi

    (Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213)

Abstract

Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. When it comes to the synchronous setting (such that independent samples for all state–action pairs are drawn from a generative model in each iteration), substantial progress has been made toward understanding the sample efficiency of Q-learning. Consider a γ-discounted infinite-horizon MDP with state space S and action space A: to yield an entry-wise ε-approximation of the optimal Q-function, state-of-the-art theory for Q-learning requires a sample size exceeding the order of |S||A| / ((1 − γ)^5 ε^2), which fails to match existing minimax lower bounds. This gives rise to natural questions: What is the sharp sample complexity of Q-learning? Is Q-learning provably suboptimal? This paper addresses these questions for the synchronous setting: (1) when the action space contains a single action (so that Q-learning reduces to TD learning), we prove that the sample complexity of TD learning is minimax optimal and scales as |S| / ((1 − γ)^3 ε^2) (up to a log factor); (2) when the action space contains at least two actions, we settle the sample complexity of Q-learning to be on the order of |S||A| / ((1 − γ)^4 ε^2) (up to a log factor). Our theory unveils the strict suboptimality of Q-learning when the action space contains at least two actions and rigorizes the negative impact of overestimation in Q-learning. Finally, we extend our analysis to accommodate asynchronous Q-learning (i.e., the case with Markovian samples), sharpening the horizon dependency of its sample complexity to 1/(1 − γ)^4.
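
To make the synchronous setting concrete, the sketch below illustrates synchronous Q-learning with a generative model: at every iteration, one independent next-state sample is drawn for every state–action pair and all Q-entries are updated at once. This is an illustrative sketch only, not the authors' construction; the random MDP (P, r), the step-size schedule, and all numerical choices are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of synchronous Q-learning with a generative model.
# Each iteration draws one independent sample s' ~ P(. | s, a) for EVERY
# state-action pair (the "synchronous" setting described in the abstract).
# The random MDP below and the step-size schedule are illustrative only.

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel, P[s, a, :] sums to 1
r = rng.uniform(0.0, 1.0, size=(S, A))       # rewards in [0, 1]

Q = np.zeros((S, A))
T = 10_000
for t in range(1, T + 1):
    eta = 1.0 / (1 + (1 - gamma) * t)        # rescaled linear step size (illustrative choice)
    # Draw one next state for every (s, a) pair from the generative model.
    next_states = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)]
                            for s in range(S)])
    # Empirical Bellman target: r(s, a) + gamma * max_a' Q(s', a').
    target = r + gamma * Q[next_states].max(axis=-1)
    # Synchronous Q-learning update applied to all entries simultaneously.
    Q += eta * (target - Q)

print("Estimated Q-function:\n", Q)
```

The max over actions in the Bellman target is the source of the overestimation effect discussed in the abstract; with a single action the max disappears and the iteration reduces to TD learning.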

Suggested Citation

  • Gen Li & Changxiao Cai & Yuxin Chen & Yuting Wei & Yuejie Chi, 2024. "Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis," Operations Research, INFORMS, vol. 72(1), pages 222-236, January.
  • Handle: RePEc:inm:oropre:v:72:y:2024:i:1:p:222-236
    DOI: 10.1287/opre.2023.2450

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.2023.2450
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.2023.2450?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
