A Deep Reinforcement Learning Trader without Offline Training
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
- Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
- David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
- Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
- Lydia Tsiami & Christos Makropoulos & Dragan Savic, 2025. "Rethinking Urban Water Network Design: A Reinforcement Learning Framework for Long-Term Flexible Planning," Water Resources Management: An International Journal, Published for the European Water Resources Association (EWRA), Springer;European Water Resources Association (EWRA), vol. 39(13), pages 7155-7174, October.
- Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
- János Kramár & Tom Eccles & Ian Gemp & Andrea Tacchetti & Kevin R. McKee & Mateusz Malinowski & Thore Graepel & Yoram Bachrach, 2022. "Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
- Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
- Cui, Tianxiang & Du, Nanjiang & Yang, Xiaoying & Ding, Shusheng, 2024. "Multi-period portfolio optimization using a deep reinforcement learning hyper-heuristic approach," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
- Su, Yang & Yang, Hai, 2025. "Enhancing feeder bus service coverage with Multi-Agent Reinforcement Learning: A case study in Hong Kong," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 196(C).
- Weichao Mao & Tamer Başar, 2023. "Provably Efficient Reinforcement Learning in Decentralized General-Sum Markov Games," Dynamic Games and Applications, Springer, vol. 13(1), pages 165-186, March.
- Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
- Liu, Bokai & Wang, Yizheng & Rabczuk, Timon & Olofsson, Thomas & Lu, Weizhuo, 2024. "Multi-scale modeling in thermal conductivity of Polyurethane incorporated with Phase Change Materials using Physics-Informed Neural Networks," Renewable Energy, Elsevier, vol. 220(C).
- Zhang, Qin & Liu, Yu & Xiang, Yisha & Xiahou, Tangfan, 2024. "Reinforcement learning in reliability and maintenance optimization: A tutorial," Reliability Engineering and System Safety, Elsevier, vol. 251(C).
- Yang, Junjiao & Hu, Zhan-Chao, 2025. "Deep reinforcement learning for optimizing the thermoacoustic core in a supercritical CO2 thermoacoustic engine," Energy, Elsevier, vol. 325(C).
- Wang, Xin & Liu, Shuo & Yu, Yifan & Yue, Shengzhi & Liu, Ying & Zhang, Fumin & Lin, Yuanshan, 2023. "Modeling collective motion for fish schooling via multi-agent reinforcement learning," Ecological Modelling, Elsevier, vol. 477(C).
- Li, Shiyao & Zhou, Yue & Wu, Jianzhong & Pan, Yiqun & Huang, Zhizhong & Zhou, Nan, 2025. "A digital twin of multiple energy hub systems with peer-to-peer energy sharing," Applied Energy, Elsevier, vol. 380(C).
- Tian Zhu & Merry H. Ma, 2022. "Deriving the Optimal Strategy for the Two Dice Pig Game via Reinforcement Learning," Stats, MDPI, vol. 5(3), pages 1-14, August.
- Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
- Pedro Afonso Fernandes, 2024. "Forecasting with Neuro-Dynamic Programming," Papers 2404.03737, arXiv.org.
- Nathan Companez & Aldeida Aleti, 2016. "Can Monte-Carlo Tree Search learn to sacrifice?," Journal of Heuristics, Springer, vol. 22(6), pages 783-813, December.
- Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-BIG-2023-04-10 (Big Data)
- NEP-CMP-2023-04-10 (Computational Economics)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2303.00356. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.