IDEAS home Printed from https://ideas.repec.org/a/inm/ormnsc/v72y2026i2p1007-1024.html

Last-Iterate Convergence in No-Regret Learning: Games with Reference Effects Under Logit Demand

Author

Listed:
  • Mengzi Amy Guo

    (Department of Industrial Engineering and Operations Research, University of California, Berkeley, Berkeley, California 94720)

  • Donghao Ying

    (Department of Industrial Engineering and Operations Research, University of California, Berkeley, Berkeley, California 94720)

  • Javad Lavaei

    (Department of Industrial Engineering and Operations Research, University of California, Berkeley, Berkeley, California 94720)

  • Zuo-Jun Max Shen

    (Department of Industrial Engineering and Operations Research, University of California, Berkeley, Berkeley, California 94720; and Faculty of Engineering and Faculty of Business and Economics, University of Hong Kong, Hong Kong, China)

Abstract

This work examines the behavior of the online projected gradient ascent (OPGA) algorithm and its variant in a repeated oligopoly price competition under reference effects. In particular, we consider multiple firms engaged in a multiperiod price competition, where consecutive periods are linked by the reference-price update and each firm has access only to its own first-order feedback. Consumers assess their willingness to pay by comparing the current price against a memory-based reference price, and their choices follow the multinomial logit (MNL) model. We use the notion of stationary Nash equilibrium (SNE), defined as the fixed point of the equilibrium pricing policy, to simultaneously capture long-run equilibrium and stability. We first study loss-neutral reference effects and show that if the firms employ the OPGA algorithm, which adjusts prices using the first-order derivatives of their log-revenues, the price and reference-price paths attain last-iterate convergence to the unique SNE, thereby guaranteeing no-regret learning and market stability. Moreover, with appropriate step-sizes, we prove that this algorithm exhibits a convergence rate of Õ(1/t²) in terms of the squared distance and achieves a constant dynamic regret. Despite the simplicity of the algorithm, its convergence analysis is challenging because the model lacks typical properties, such as strong monotonicity and variational stability, that are ordinarily used in the convergence analysis of online games. The inherently asymmetric nature of reference effects motivates exploration beyond loss-neutrality. When loss-averse reference effects are introduced, we propose a variant of the original algorithm, named conservative-OPGA (C-OPGA), to handle the nonsmooth revenue functions and show that the price and reference price achieve last-iterate convergence to the set of SNEs at a rate of O(1/t).
Finally, we demonstrate the practicality and robustness of OPGA and C-OPGA by theoretically showing that these algorithms can also adapt to firm-differentiated step-sizes and inexact gradients.
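The gradient dynamic described in the abstract can be sketched in a few lines. The following Python snippet is an illustrative toy, not the authors' exact model: the linear reference-dependent utility a_i − b_i·p_i + c_i·(r_i − p_i), the price interval [lo, hi], the exponential-smoothing reference update with weight alpha, and the 1/√t step-size are all assumptions chosen to make a self-contained, runnable example of projected gradient ascent on log-revenues under MNL demand.

```python
import numpy as np

def mnl_shares(p, r, a, b, c):
    """MNL choice probabilities with a reference-price term in the utility.

    Utility of firm i: u_i = a_i - b_i * p_i + c_i * (r_i - p_i),
    so consumers gain utility when the price sits below the reference price.
    The outside option has utility 0.
    """
    u = a - b * p + c * (r - p)
    e = np.exp(u)
    return e / (1.0 + e.sum())

def opga(a, b, c, p0, r0, alpha=0.8, T=2000, lo=0.01, hi=10.0):
    """Illustrative OPGA-style loop: each firm ascends its own log-revenue.

    With log R_i = log p_i + log d_i, only first-order feedback on the own
    price is needed:  d/dp_i log R_i = 1/p_i - (b_i + c_i) * (1 - d_i).
    """
    p, r = p0.astype(float).copy(), r0.astype(float).copy()
    for t in range(1, T + 1):
        d = mnl_shares(p, r, a, b, c)
        grad = 1.0 / p - (b + c) * (1.0 - d)
        eta = 1.0 / np.sqrt(t)                 # diminishing step-size (assumed)
        p = np.clip(p + eta * grad, lo, hi)    # projection onto the price box
        r = alpha * r + (1.0 - alpha) * p      # memory-based reference update
    return p, r
```

In a symmetric two-firm instance (a = 1, b = 1, c = 0.5), the fixed-point condition 1/p = (b + c)(1 − d) with r = p is solved exactly by p = 1, and the iterates of both the prices and the (lagging) reference prices settle near that common point, illustrating the last-iterate convergence to an SNE-like fixed point discussed above.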

Suggested Citation

  • Mengzi Amy Guo & Donghao Ying & Javad Lavaei & Zuo-Jun Max Shen, 2026. "Last-Iterate Convergence in No-Regret Learning: Games with Reference Effects Under Logit Demand," Management Science, INFORMS, vol. 72(2), pages 1007-1024, February.
  • Handle: RePEc:inm:ormnsc:v:72:y:2026:i:2:p:1007-1024
    DOI: 10.1287/mnsc.2023.03464

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.2023.03464
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mnsc.2023.03464?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    More about this item



    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormnsc:v:72:y:2026:i:2:p:1007-1024. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.