IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2602.15182.html

Autodeleveraging as Online Learning

Author

Listed:
  • Tarun Chitra
  • Nagu Thogiti
  • Mauricio Jean Pieer Trujillo Ramirez
  • Victor Xu

Abstract

Autodeleveraging (ADL) is a last-resort loss-socialization mechanism used by perpetual futures venues when liquidation and insurance buffers are insufficient to restore solvency. Despite the scale of perpetual futures markets, ADL has received limited formal treatment as a sequential control problem. This paper provides a concise formalization of ADL as online learning on a PNL-haircut domain: at each round, the venue selects a solvency budget and a set of profitable trader accounts, and those accounts are liquidated to cover shortfalls up to the solvency budget, with the aim of recovering exchange-wide solvency. In this model, ADL haircuts apply to positive PNL (unrealized gains), not to posted collateral principal. Using our online learning model, we provide robustness results and theoretical upper bounds on how poorly a mechanism can perform at recovering solvency. We apply our model to the October 10, 2025 Hyperliquid stress episode: the regret incurred by Hyperliquid's production ADL queue is about 50\% of a regret upper bound calibrated to this event, while our optimized algorithm achieves about 2.6\% of the same bound. In dollar terms, the production ADL mechanism over-liquidates trader profits by up to \$51.7M. We also counterfactually evaluate algorithms inspired by our online learning framework and find that the best of them reduces overshoot to \$3M. Our results provide simple, implementable mechanisms for improving ADL on live perpetuals exchanges.
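The round structure described in the abstract — haircut positive PNL, never posted collateral, until a solvency budget is met — can be sketched in a few lines. This is an illustrative assumption of one possible queue ordering (largest unrealized gains first), not the paper's actual algorithm or Hyperliquid's production queue; all names are hypothetical.

```python
def adl_round(accounts, solvency_budget):
    """One ADL round: haircut positive PNL (unrealized gains only,
    never collateral principal) until the solvency budget is covered.

    accounts: dict mapping account name -> unrealized PNL
              (positive = profitable, negative = losing).
    Returns (haircuts, recovered): per-account haircuts and the
    total amount recovered toward the budget.
    """
    # Only profitable accounts are eligible for ADL haircuts.
    profitable = sorted(
        ((name, pnl) for name, pnl in accounts.items() if pnl > 0),
        key=lambda kv: kv[1],
        reverse=True,  # assumed ordering: largest gains cut first
    )
    haircuts, remaining = {}, solvency_budget
    for name, pnl in profitable:
        if remaining <= 0:
            break
        cut = min(pnl, remaining)  # cap the haircut at this account's gains
        haircuts[name] = cut
        remaining -= cut
    return haircuts, solvency_budget - remaining


haircuts, recovered = adl_round({"a": 10.0, "b": 4.0, "c": -3.0}, 12.0)
# "a" is fully haircut (10), "b" covers the remainder (2),
# and the losing account "c" is never touched.
```

A regret-minimizing mechanism in the paper's sense would choose the budget and the eligible set adaptively across rounds; the sketch above fixes both to show only the single-round accounting.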

Suggested Citation

  • Tarun Chitra & Nagu Thogiti & Mauricio Jean Pieer Trujillo Ramirez & Victor Xu, 2026. "Autodeleveraging as Online Learning," Papers 2602.15182, arXiv.org.
  • Handle: RePEc:arx:papers:2602.15182
    Download full text from publisher

    File URL: http://arxiv.org/pdf/2602.15182
    File Function: Latest version
    Download Restriction: no
    ---><---

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Liang, Jinpeng & Wu, Jianjun & Gao, Ziyou & Sun, Huijun & Yang, Xin & Lo, Hong K., 2019. "Bus transit network design with uncertainties on the basis of a metro network: A two-step model framework," Transportation Research Part B: Methodological, Elsevier, vol. 126(C), pages 115-138.
    2. Hikima, Yuya & Takeda, Akiko, 2025. "Stochastic approach for price optimization problems with decision-dependent uncertainty," European Journal of Operational Research, Elsevier, vol. 322(2), pages 541-553.
    3. Boxiao Chen, 2021. "Data‐Driven Inventory Control with Shifting Demand," Production and Operations Management, Production and Operations Management Society, vol. 30(5), pages 1365-1385, May.
    4. Liam Madden & Stephen Becker & Emiliano Dall’Anese, 2021. "Bounds for the Tracking Error of First-Order Online Optimization Methods," Journal of Optimization Theory and Applications, Springer, vol. 189(2), pages 437-457, May.
    5. Lukasz Sliwinski & Tanut Treetanthiploet & David Siska & Lukasz Szpruch, 2025. "Competitive Pricing Using Model-Based Bandits," Computational Economics, Springer;Society for Computational Economics, vol. 66(6), pages 4813-4867, December.
    6. Santiago R. Balseiro & Yonatan Gur, 2019. "Learning in Repeated Auctions with Budgets: Regret Minimization and Equilibrium," Management Science, INFORMS, vol. 65(9), pages 3952-3968, September.
    7. Quanquan Liu & Yining Wang, 2025. "Technical Note: Maximum Likelihood Optimization via Parallel Estimating Gradient Ascent," Computational Economics, Springer;Society for Computational Economics, vol. 66(6), pages 4621-4643, December.
    8. N. Bora Keskin & Assaf Zeevi, 2017. "Chasing Demand: Learning and Earning in a Changing Environment," Mathematics of Operations Research, INFORMS, vol. 42(2), pages 277-307, May.
    9. Tarun Chitra, 2025. "A Curationary Tale: Logarithmic Regret in DeFi Lending via Dynamic Pricing," Papers 2503.18237, arXiv.org.
    10. Yang, Xiangyu & Zhang, Jianghua & Hu, Jian-Qiang & Hu, Jiaqiao, 2024. "Nonparametric multi-product dynamic pricing with demand learning via simultaneous price perturbation," European Journal of Operational Research, Elsevier, vol. 319(1), pages 191-205.
    11. Kuang Xu & Se-Young Yun, 2020. "Reinforcement with Fading Memories," Mathematics of Operations Research, INFORMS, vol. 45(4), pages 1258-1288, November.
    12. Ludovico Crippa & Yonatan Gur & Bar Light, 2025. "Equilibria under Dynamic Benchmark Consistency in Non-Stationary Multi-Agent Systems," Papers 2501.11897, arXiv.org, revised May 2025.
    13. Xi Chen & Yining Wang & Yu-Xiang Wang, 2019. "Technical Note—Nonstationary Stochastic Optimization Under L p,q -Variation Measures," Operations Research, INFORMS, vol. 67(6), pages 1752-1765, November.
    14. Ludovico Crippa & Yonatan Gur & Bar Light, 2022. "Equilibria in Repeated Games under No-Regret with Dynamic Benchmarks," Papers 2212.03152, arXiv.org, revised Jan 2025.
