Randomized Optimal Stopping Problem in Continuous Time and Reinforcement Learning Algorithm
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Longstaff, Francis A & Schwartz, Eduardo S, 2001. "Valuing American Options by Simulation: A Simple Least-Squares Approach," The Review of Financial Studies, Society for Financial Studies, vol. 14(1), pages 113-147.
- Sebastian Becker & Patrick Cheridito & Arnulf Jentzen & Timo Welti, 2019. "Solving high-dimensional optimal stopping problems using deep learning," Papers 1908.01602, arXiv.org, revised Aug 2021.
- Dieter Hendricks & Diane Wilcox, 2014. "A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution," Papers 1403.2229, arXiv.org.
- David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
- David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap, et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen, et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Longstaff, Francis A & Schwartz, Eduardo S, 2001. "Valuing American Options by Simulation: A Simple Least-Squares Approach," University of California at Los Angeles, Anderson Graduate School of Management qt43n1k4jb, Anderson Graduate School of Management, UCLA.
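The two Longstaff & Schwartz (2001) entries above introduce the least-squares Monte Carlo (LSMC) approach to American option valuation: simulate price paths forward, then step backward through the exercise dates, regressing realized continuation cashflows on the current state and exercising wherever the immediate payoff beats the fitted continuation value. The sketch below is a minimal illustration of that recipe; the GBM dynamics, the quadratic polynomial basis, and every parameter value are assumptions chosen for the example, not taken from either paper or from the paper on this page.

```python
# Minimal LSMC sketch for a Bermudan put, assuming GBM dynamics and a
# quadratic polynomial regression basis; all parameters are illustrative.
import numpy as np

def lsmc_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate GBM paths; row t holds all path values at exercise date t+1.
    z = rng.standard_normal((n_steps, n_paths))
    log_s = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=0)
    S = np.exp(log_s)                      # shape (n_steps, n_paths)
    payoff = lambda s: np.maximum(K - s, 0.0)

    # Backward induction: cashflow starts as the terminal payoff.
    cash = payoff(S[-1])
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)            # discount one step back
        itm = payoff(S[t]) > 0             # regress only on in-the-money paths
        if itm.sum() > 0:
            x = S[t][itm]
            A = np.vander(x, 3)            # basis: [x^2, x, 1]
            coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            cont = A @ coef                # fitted continuation value
            exercise = payoff(x) > cont    # exercise where immediate payoff wins
            cash[itm] = np.where(exercise, payoff(x), cash[itm])
    return np.exp(-r * dt) * cash.mean()   # discount from the first exercise date

print(f"LSMC Bermudan put estimate: {lsmc_bermudan_put():.3f}")
```

With these illustrative parameters the estimate should land a little above the corresponding Black-Scholes European put; restricting the regression to in-the-money paths follows the recommendation of the original paper.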
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- A. Max Reppen & H. Mete Soner & Valentin Tissot-Daguette, 2022. "Deep Stochastic Optimization in Finance," Papers 2205.04604, arXiv.org.
- Xuwei Yang & Anastasis Kratsios & Florian Krach & Matheus Grasselli & Aurelien Lucchi, 2023. "Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing," Papers 2309.04557, arXiv.org, revised Oct 2024.
- Sebastian Becker & Patrick Cheridito & Arnulf Jentzen, 2020. "Pricing and Hedging American-Style Options with Deep Learning," JRFM, MDPI, vol. 13(7), pages 1-12, July.
- Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
- Erhan Bayraktar & Qi Feng & Zhaoyu Zhang, 2022. "Deep Signature Algorithm for Multi-dimensional Path-Dependent Options," Papers 2211.11691, arXiv.org, revised Jan 2024.
- A. Max Reppen & H. Mete Soner & Valentin Tissot-Daguette, 2023. "Deep stochastic optimization in finance," Digital Finance, Springer, vol. 5(1), pages 91-111, March.
- Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
- Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
- Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
- Shijun Wang & Baocheng Zhu & Chen Li & Mingzhe Wu & James Zhang & Wei Chu & Yuan Qi, 2020. "Riemannian Proximal Policy Optimization," Computer and Information Science, Canadian Center of Science and Education, vol. 13(3), pages 1-93, August.
- Lukas Gonon, 2022. "Deep neural network expressivity for optimal stopping problems," Papers 2210.10443, arXiv.org.
- Beatriz Salvador & Cornelis W. Oosterlee & Remco van der Meer, 2020. "Financial Option Valuation by Unsupervised Learning with Artificial Neural Networks," Mathematics, MDPI, vol. 9(1), pages 1-20, December.
- Beatriz Salvador & Cornelis W. Oosterlee & Remco van der Meer, 2020. "Financial option valuation by unsupervised learning with artificial neural networks," Papers 2005.12059, arXiv.org.
- Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
- Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
- Zoran Stoiljkovic, 2023. "Applying Reinforcement Learning to Option Pricing and Hedging," Papers 2310.04336, arXiv.org.
- Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
- Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
- A. Max Reppen & H. Mete Soner & Valentin Tissot-Daguette, 2022. "Neural Optimal Stopping Boundary," Papers 2205.04595, arXiv.org, revised May 2023.
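Several of the related items above (the Becker, Cheridito & Jentzen papers and Gonon's expressivity result, among others) concern deep-learning methods that parameterize the stopping decision with neural networks, which also connects to the randomized stopping in this paper's title. Below is a minimal sketch of that idea in PyTorch: one small network per exercise date outputs a soft stopping probability, and the expected payoff of the induced randomized stopping time is maximized by gradient ascent. The dynamics, payoff, architecture, and training settings are all illustrative assumptions, not details of any of the cited papers.

```python
# Minimal sketch of neural (randomized) optimal stopping for a Bermudan put:
# soft stopping probabilities trained by gradient ascent on expected payoff.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_steps, n_paths = 10, 20_000
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps

# Simulate GBM paths, shape (n_steps, n_paths).
z = torch.randn(n_steps, n_paths)
S = S0 * torch.exp(torch.cumsum((r - 0.5 * sigma**2) * dt
                                + sigma * dt**0.5 * z, dim=0))
disc = torch.exp(-r * dt * torch.arange(1, n_steps + 1)).unsqueeze(1)
G = disc * torch.clamp(K - S, min=0.0)   # discounted put payoffs g_t

# One tiny network per date maps the normalized state to a stop probability.
nets = nn.ModuleList([nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                    nn.Linear(16, 1), nn.Sigmoid())
                      for _ in range(n_steps)])
opt = torch.optim.Adam(nets.parameters(), lr=1e-2)

for epoch in range(200):
    alive = torch.ones(n_paths)          # probability of not having stopped yet
    value = torch.zeros(n_paths)
    for t in range(n_steps):
        p = nets[t](S[t].unsqueeze(1) / K - 1.0).squeeze(1)
        if t == n_steps - 1:
            p = torch.ones_like(p)       # must stop at maturity
        value = value + alive * p * G[t] # payoff collected if stopping at t
        alive = alive * (1.0 - p)
    loss = -value.mean()                 # ascend on the expected payoff
    opt.zero_grad(); loss.backward(); opt.step()

print(f"estimated randomized stopping value: {-loss.item():.3f}")
```

Relaxing the hard stop/continue decision to a probability in (0, 1) is what makes the objective differentiable; a hard stopping rule can be recovered afterwards by thresholding the learned probabilities at 1/2.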
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-CMP-2022-09-19 (Computational Economics)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2208.02409. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.