IDEAS home Printed from https://ideas.repec.org/a/eee/matcom/v241y2026ipap430-448.html

Robust policy iteration for the continuous-time stochastic H∞ control problem with unknown dynamics

Author

Listed:
  • Sun, Zhongshi
  • Jia, Guangyan

Abstract

In this article, we study a continuous-time stochastic H∞ control problem via reinforcement learning (RL), formulating it as a stochastic linear-quadratic two-person zero-sum differential game (LQZSG). First, we propose a policy-iteration (PI)-based RL algorithm that iteratively solves the stochastic game algebraic Riccati equation from collected state and control data, with all system dynamics unknown. Notably, the algorithm requires data collection only once during the iteration process. We then prove convergence of the RL algorithm and analyze the robustness of the inner and outer loops of the PI algorithm, showing that when the iteration error stays within a certain range, the algorithm converges to a small neighborhood of the saddle point of the stochastic LQZSG problem. Finally, we validate the effectiveness of the proposed algorithm on two simulation examples.
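The inner/outer-loop PI structure described in the abstract can be illustrated, in the much simpler known-dynamics deterministic special case, by a model-based sketch for the game algebraic Riccati equation. This is not the paper's data-driven stochastic algorithm; the toy matrices A, B, D, Q, R and the attenuation level gamma below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical toy model (the paper treats the dynamics as unknown; here a
# known system is used purely to show the inner/outer iteration pattern).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])   # control input matrix
D = np.array([[0.0], [0.5]])   # disturbance input matrix
Q = np.eye(2)
R = np.eye(1)
gamma = 2.0                    # H-infinity attenuation level

def game_pi(A, B, D, Q, R, gamma, outer_iters=50, inner_iters=50, tol=1e-10):
    """Model-based policy iteration for the game algebraic Riccati equation
        A'P + PA + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0.
    Outer loop: update the worst-case disturbance gain for fixed P.
    Inner loop: Kleinman iteration (Lyapunov solves) for the minimizing player.
    """
    n = A.shape[0]
    P = np.zeros((n, n))
    for _ in range(outer_iters):
        # Outer loop: fix the disturbance policy w = Kw x induced by P.
        Kw = (1.0 / gamma**2) * D.T @ P
        A_w = A + D @ Kw
        Q_w = Q - gamma**2 * Kw.T @ Kw   # state cost with disturbance fixed
        # Inner loop: standard LQR policy iteration for min over u,
        # warm-started from the previous value matrix.
        Ku = np.linalg.solve(R, B.T @ P)
        for _ in range(inner_iters):
            A_cl = A_w - B @ Ku
            # Policy evaluation: A_cl' P_new + P_new A_cl + Q_w + Ku'R Ku = 0.
            P_new = solve_continuous_lyapunov(A_cl.T, -(Q_w + Ku.T @ R @ Ku))
            Ku_next = np.linalg.solve(R, B.T @ P_new)  # policy improvement
            if np.linalg.norm(Ku_next - Ku) < tol:
                Ku = Ku_next
                break
            Ku = Ku_next
        if np.linalg.norm(P_new - P) < tol:
            return P_new
        P = P_new
    return P
```

At the fixed point, the Lyapunov equation of the inner loop collapses exactly to the game Riccati equation, which is how convergence can be checked numerically. The paper's contribution is to run this scheme on measured trajectory data in the stochastic setting, without A, B, or D.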

Suggested Citation

  • Sun, Zhongshi & Jia, Guangyan, 2026. "Robust policy iteration for the continuous-time stochastic H∞ control problem with unknown dynamics," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 241(PA), pages 430-448.
  • Handle: RePEc:eee:matcom:v:241:y:2026:i:pa:p:430-448
    DOI: 10.1016/j.matcom.2025.09.009

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0378475425003817
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.matcom.2025.09.009?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Sun, Zhongshi & Jia, Guangyan, 2023. "Reinforcement learning for exploratory linear-quadratic two-person zero-sum stochastic differential games," Applied Mathematics and Computation, Elsevier, vol. 442(C).
    2. Xin Guo & Renyuan Xu & Thaleia Zariphopoulou, 2022. "Entropy Regularization for Mean Field Games with Learning," Mathematics of Operations Research, INFORMS, vol. 47(4), pages 3239-3260, November.
    3. Yanwei Jia & Xun Yu Zhou, 2021. "Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms," Papers 2111.11232, arXiv.org, revised Jul 2022.
    4. Liu, Xikui & Ge, Yingying & Li, Yan, 2019. "Stackelberg games for model-free continuous-time stochastic systems based on adaptive dynamic programming," Applied Mathematics and Computation, Elsevier, vol. 363(C), pages 1-1.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tian, Xiu-Qin & Liu, Shu-Jun & Yang, Xue, 2024. "Stochastic adaptive linear quadratic nonzero-sum differential games," Applied Mathematics and Computation, Elsevier, vol. 477(C).
    2. Zhou Fang, 2023. "Continuous-Time Path-Dependent Exploratory Mean-Variance Portfolio Construction," Papers 2303.02298, arXiv.org.
    3. Wanting He & Wenyuan Li & Yunran Wei, 2025. "Periodic evaluation of defined-contribution pension fund: A dynamic risk measure approach," Papers 2508.05241, arXiv.org.
    4. Min Dai & Yuchao Dong & Yanwei Jia & Xun Yu Zhou, 2026. "Merton's Problem with Recursive Perturbed Utility," Papers 2602.13544, arXiv.org.
    5. Yilie Huang & Yanwei Jia & Xun Yu Zhou, 2024. "Mean--Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study," Papers 2412.16175, arXiv.org, revised Mar 2026.
    6. Sun, Zhongshi & Jia, Guangyan, 2023. "Reinforcement learning for exploratory linear-quadratic two-person zero-sum stochastic differential games," Applied Mathematics and Computation, Elsevier, vol. 442(C).
    7. Wu, Bo & Li, Lingfei, 2024. "Reinforcement learning for continuous-time mean-variance portfolio selection in a regime-switching market," Journal of Economic Dynamics and Control, Elsevier, vol. 158(C).
    8. Huy Chau & Duy Nguyen & Thai Nguyen, 2024. "Continuous-time optimal investment with portfolio constraints: a reinforcement learning approach," Papers 2412.10692, arXiv.org.
    9. Mononen, Lasse, 2025. "On Preference for Simplicity and Probability Weighting," Center for Mathematical Economics Working Papers 748, Center for Mathematical Economics, Bielefeld University.
    10. Kerimkulov, Bekzhan & Šiška, David & Szpruch, Łukasz & Zhang, Yufei, 2025. "Mirror descent for stochastic control problems with measure-valued controls," Stochastic Processes and their Applications, Elsevier, vol. 190(C).
    11. Liu, Chong & Zhang, Huaguang & Luo, Yanhong & Zhang, Kun, 2021. "Echo state network-based online optimal control for discrete-time nonlinear systems," Applied Mathematics and Computation, Elsevier, vol. 409(C).
    12. Dong, Xu & Zhang, Huaguang & Ming, Zhongyang & Luo, Yanhong, 2025. "Optimal finite-horizon tracking control in affine nonlinear systems: A Stackelberg game approach with H2/H∞ framework," Applied Mathematics and Computation, Elsevier, vol. 495(C).
    13. Yanwei Jia & Xun Yu Zhou, 2022. "q-Learning in Continuous Time," Papers 2207.00713, arXiv.org, revised May 2025.
    14. Cardo-Miota, Javier & Khadem, Shafi & Bahloul, Mohamed, 2025. "Deep reinforcement learning based electricity bill minimization strategy for residential prosumer," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 238(C), pages 296-305.
    15. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    16. Yanwei Jia, 2024. "Continuous-time Risk-sensitive Reinforcement Learning via Quadratic Variation Penalty," Papers 2404.12598, arXiv.org, revised Mar 2026.
    17. Dianetti, Jodi & Dumitrescu, Roxana & Ferrari, Giorgio & Xu, Renyuan, 2025. "Entropy Regularization in Mean-Field Games of Optimal Stopping," Center for Mathematical Economics Working Papers 755, Center for Mathematical Economics, Bielefeld University.
    18. Zhou Fang & Haiqing Xu, 2023. "Over-the-Counter Market Making via Reinforcement Learning," Papers 2307.01816, arXiv.org.
    19. Min Dai & Yu Sun & Zuo Quan Xu & Xun Yu Zhou, 2024. "Learning to Optimally Stop Diffusion Processes, with Financial Applications," Papers 2408.09242, arXiv.org, revised Aug 2025.
    20. Dianetti, Jodi & Ferrari, Giorgio & Xu, Renyuan, 2025. "Exploratory Optimal Stopping: A Singular Control Formulation," Center for Mathematical Economics Working Papers 740, Center for Mathematical Economics, Bielefeld University.

    More about this item


    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:matcom:v:241:y:2026:i:pa:p:430-448. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/mathematics-and-computers-in-simulation/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.