Printed from https://ideas.repec.org/p/arx/papers/2207.00713.html

q-Learning in Continuous Time

Author

Listed:
  • Yanwei Jia
  • Xun Yu Zhou

Abstract

We study the continuous-time counterpart of Q-learning for reinforcement learning (RL) under the entropy-regularized, exploratory diffusion process formulation introduced by Wang et al. (2020). As the conventional (big) Q-function collapses in continuous time, we consider its first-order approximation and coin the term "(little) q-function". This function is related to the instantaneous advantage rate function as well as the Hamiltonian. We develop a "q-learning" theory around the q-function that is independent of time discretization. Given a stochastic policy, we jointly characterize the associated q-function and value function by martingale conditions on certain stochastic processes, in both on-policy and off-policy settings. We then apply the theory to devise different actor-critic algorithms for solving the underlying RL problems, depending on whether or not the density function of the Gibbs measure generated from the q-function can be computed explicitly. One of our algorithms interprets the well-known Q-learning algorithm SARSA, and another recovers a policy gradient (PG) based continuous-time algorithm proposed in Jia and Zhou (2022b). Finally, we conduct simulation experiments to compare the performance of our algorithms with those of the PG-based algorithms in Jia and Zhou (2022b) and time-discretized conventional Q-learning algorithms.
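To make the abstract's two key ingredients concrete, here is a minimal, time-discretized sketch in Python. This is our own illustration, not the authors' algorithm: the state/action grids, tabular V and q, the 1-D dynamics and reward, and the learning rates are all invented for demonstration. Only two elements come from the abstract: the policy is the Gibbs measure proportional to exp(q/γ), and the critic update uses a TD-style error that the martingale characterization says should be mean-zero at the solution (the paper's exact conditions also involve entropy and discounting terms omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (our own assumptions): 1-D state grid, discrete
# action grid, tabular value function V and little-q function q.
states = np.linspace(-1.0, 1.0, 21)
actions = np.linspace(-1.0, 1.0, 5)
V = np.zeros(len(states))
q = np.zeros((len(states), len(actions)))
gamma = 0.1   # temperature of the entropy regularization
dt = 0.05     # time step; the theory itself is discretization-free
alpha = 0.1   # learning rate (hypothetical)

def nearest(grid, x):
    """Index of the grid point closest to x."""
    return int(np.argmin(np.abs(grid - x)))

def gibbs_policy(si):
    """Stochastic policy proportional to exp(q/gamma):
    the Gibbs measure generated from the q-function."""
    logits = q[si] / gamma
    p = np.exp(logits - logits.max())  # subtract max for stability
    return p / p.sum()

for episode in range(200):
    x = 0.0
    for step in range(40):
        si = nearest(states, x)
        p = gibbs_policy(si)
        ai = rng.choice(len(actions), p=p)
        a = actions[ai]
        # Synthetic controlled diffusion step and running reward.
        x_next = np.clip(x + a * dt + 0.1 * np.sqrt(dt) * rng.standard_normal(),
                         -1.0, 1.0)
        r = -(x ** 2 + 0.1 * a ** 2)
        sj = nearest(states, x_next)
        # Schematic martingale condition: V(X_t) + integral of (r - q)
        # should be drift-free, so this TD-style error is mean-zero
        # at the solution.
        delta = V[sj] - V[si] + (r - q[si, ai]) * dt
        V[si] += alpha * delta
        q[si, ai] += alpha * delta   # hypothetical critic step for q
        x = x_next
```

The Gibbs policy is what makes the method "soft": exploration is built in through the temperature γ, and as γ shrinks the policy concentrates on the actions with the largest q-values.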

Suggested Citation

  • Yanwei Jia & Xun Yu Zhou, 2022. "q-Learning in Continuous Time," Papers 2207.00713, arXiv.org, revised Apr 2023.
  • Handle: RePEc:arx:papers:2207.00713

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2207.00713
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. Yanwei Jia & Xun Yu Zhou, 2021. "Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms," Papers 2111.11232, arXiv.org, revised Jul 2022.
    3. Yanwei Jia & Xun Yu Zhou, 2021. "Policy Evaluation and Temporal-Difference Learning in Continuous Time and Space: A Martingale Approach," Papers 2108.06655, arXiv.org, revised Feb 2022.
    4. Sun, Yeneng, 2006. "The exact law of large numbers via Fubini extension and characterization of insurable risks," Journal of Economic Theory, Elsevier, vol. 126(1), pages 31-69, January.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Min Dai & Yuchao Dong & Yanwei Jia & Xun Yu Zhou, 2023. "Learning Merton's Strategies in an Incomplete Market: Recursive Entropy Regularization and Biased Gaussian Exploration," Papers 2312.11797, arXiv.org.
    2. Xia Han & Ruodu Wang & Xun Yu Zhou, 2022. "Choquet regularization for reinforcement learning," Papers 2208.08497, arXiv.org.
    3. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    4. Lijun Bo & Yijie Huang & Xiang Yu, 2023. "On optimal tracking portfolio in incomplete markets: The classical control and the reinforcement learning approaches," Papers 2311.14318, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhou Fang, 2023. "Continuous-Time Path-Dependent Exploratory Mean-Variance Portfolio Construction," Papers 2303.02298, arXiv.org.
    2. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    3. Zhou Fang & Haiqing Xu, 2023. "Over-the-Counter Market Making via Reinforcement Learning," Papers 2307.01816, arXiv.org.
    4. Zhou Fang & Haiqing Xu, 2023. "Market Making of Options via Reinforcement Learning," Papers 2307.01814, arXiv.org.
    5. Jonas Hedlund & Carlos Oyarzun, 2018. "Imitation in heterogeneous populations," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 65(4), pages 937-973, June.
    6. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    7. Li, Fei & Song, Yangbo & Zhao, Mofei, 2023. "Global manipulation by local obfuscation," Journal of Economic Theory, Elsevier, vol. 207(C).
    8. Al-Najjar, Nabil I., 2008. "Large games and the law of large numbers," Games and Economic Behavior, Elsevier, vol. 64(1), pages 1-34, September.
    9. Berliant, Marcus & Fujishima, Shota, 2012. "Optimal dynamic nonlinear income taxes: facing an uncertain future with a sluggish government," MPRA Paper 41947, University Library of Munich, Germany.
    10. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    11. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    12. Pupato, Germán, 2017. "Performance pay, trade and inequality," Journal of Economic Theory, Elsevier, vol. 172(C), pages 478-504.
    13. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    14. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    15. Luo, Yulei & Young, Eric R., 2016. "Induced uncertainty, market price of risk, and the dynamics of consumption and wealth," Journal of Economic Theory, Elsevier, vol. 163(C), pages 1-41.
    16. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    17. Martin Hellwig & Felix Bierbrauer, 2009. "Public Good Provision in a Large Economy," 2009 Meeting Papers 1062, Society for Economic Dynamics.
    18. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    19. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    20. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2207.00713. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.