
Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback

Author

Listed:
  • Michael Jordan

    (Department of Electrical Engineering and Computer Science and Department of Statistics, University of California, Berkeley, Berkeley, California 94720)

  • Tianyi Lin

    (Department of Industrial Engineering and Operations Research, Columbia University, New York, New York 10027)

  • Zhengyuan Zhou

    (Stern School of Business, New York University, New York, New York 10012)

Abstract

Online gradient descent (OGD) is well-known to be doubly optimal under strong convexity or monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of Θ(log T) for strongly convex cost functions, and (2) in the multiagent setting of strongly monotone games with each agent employing OGD, we obtain last-iterate convergence of the joint action to a unique Nash equilibrium at an optimal rate of Θ(1/T). Whereas these finite-time guarantees highlight its merits, OGD has the drawback that it requires knowing the strong convexity/monotonicity parameters. In this paper, we design a fully adaptive OGD algorithm, AdaOGD, that does not require a priori knowledge of these parameters. In the single-agent setting, our algorithm achieves O(log² T) regret under strong convexity, which is optimal up to a log factor. Further, if each agent employs AdaOGD in strongly monotone games, the joint action converges in a last-iterate sense to a unique Nash equilibrium at a rate of O(log³ T / T), again optimal up to log factors. We illustrate our algorithms in a learning version of the classic newsvendor problem, in which, because of lost sales, only (noisy) gradient feedback can be observed. Our results immediately yield the first feasible and near-optimal algorithm for both the single-retailer and multiretailer settings. We also extend our results to the more general setting of exp-concave cost functions and games, using the online Newton step algorithm.
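The abstract contrasts classical OGD, whose step sizes depend on the strong-convexity parameter, with the adaptive AdaOGD and the newsvendor application in which only noisy gradient feedback is observable. The Python sketch below is a minimal illustration of that contrast, not the paper's algorithm: the demand model, the cost parameters c_o and c_u, the value of mu, and the parameter-free step-size schedule in adaptive_ogd are all assumptions made for illustration, and the paper's actual AdaOGD step-size rule is not reproduced on this page.

    import numpy as np

    rng = np.random.default_rng(0)

    c_o, c_u = 1.0, 3.0  # illustrative overage/underage costs (assumed values)

    def noisy_grad(q):
        # Newsvendor-style gradient feedback: even with lost sales (censored
        # demand), whether demand exceeded the order quantity q is observable,
        # so a noisy subgradient c_o*1{D < q} - c_u*1{D > q} can be formed.
        d = rng.normal(loc=10.0, scale=2.0)  # latent demand draw (assumed model)
        return c_o if d < q else -c_u

    def ogd(grad, q0, T, mu):
        # Classic OGD for mu-strongly-convex costs: step size eta_t = 1/(mu*t),
        # which requires knowing mu in advance.
        q = q0
        for t in range(1, T + 1):
            q = max(q - grad(q) / (mu * t), 0.0)  # project onto q >= 0
        return q

    def adaptive_ogd(grad, q0, T):
        # Parameter-free stand-in: eta_t = log(t+1)/t. This schedule is a
        # hypothetical placeholder, NOT the paper's AdaOGD rule.
        q = q0
        for t in range(1, T + 1):
            q = max(q - grad(q) * np.log(t + 1) / t, 0.0)
        return q

    print(ogd(noisy_grad, q0=5.0, T=20000, mu=0.5))   # needs the curvature parameter
    print(adaptive_ogd(noisy_grad, q0=5.0, T=20000))  # runs without it

In this toy run, both loops push the order quantity toward the critical-fractile solution c_u/(c_o + c_u) of the assumed demand distribution; the point emphasized in the abstract is that an adaptive method can do so without the strong-convexity parameter being supplied up front.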

Suggested Citation

  • Michael Jordan & Tianyi Lin & Zhengyuan Zhou, 2025. "Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback," Operations Research, INFORMS, vol. 73(3), pages 1675-1702, May.
  • Handle: RePEc:inm:oropre:v:73:y:2025:i:3:p:1675-1702
    DOI: 10.1287/opre.2022.0446

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.2022.0446
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.2022.0446?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
