Integrating Large Language Models and Reinforcement Learning for Sentiment-Driven Quantitative Trading
References listed on IDEAS
- Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
- Paul Glasserman & Harry Mamaysky & Jimmy Qin, 2023. "New News is Bad News," Papers 2309.05560, arXiv.org.
- Ziyao Zhou & Ronitt Mehra, 2025. "An End-To-End LLM Enhanced Trading System," Papers 2502.01574, arXiv.org.
- Paul Glasserman & Caden Lin, 2023. "Assessing Look-Ahead Bias in Stock Return Predictions Generated By GPT Sentiment Analysis," Papers 2309.17322, arXiv.org.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Sean Cao & Wei Jiang & Hui Xu, 2026. "Seeing the Goal, Missing the Truth: Human Accountability for AI Bias," Papers 2602.09504, arXiv.org.
- Yichen Luo & Yebo Feng & Jiahua Xu & Paolo Tasca & Yang Liu, 2025. "LLM-Powered Multi-Agent System for Automated Crypto Portfolio Management," Papers 2501.00826, arXiv.org, revised Jan 2025.
- Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
- Liyuan Chen & Shuoling Liu & Jiangpeng Yan & Xiaoyu Wang & Henglin Liu & Chuang Li & Kecheng Jiao & Jixuan Ying & Yang Veronica Liu & Qiang Yang & Xiu Li, 2025. "Advancing Financial Engineering with Foundation Models: Progress, Applications, and Challenges," Papers 2507.18577, arXiv.org, revised Dec 2025.
- Can Celebi & Stefan Penczynski, 2024. "Using Large Language Models for Text Classification in Experimental Economics," Working Paper series, University of East Anglia, Centre for Behavioural and Experimental Social Science (CBESS) 24-01, School of Economics, University of East Anglia, Norwich, UK.
- Yutong Yan & Raphael Tang & Zhenyu Gao & Wenxi Jiang & Yao Lu, 2026. "DatedGPT: Preventing Lookahead Bias in Large Language Models with Time-Aware Pretraining," Papers 2603.11838, arXiv.org.
- Junyu Chen & Tom Boot & Lingwei Kong & Weining Wang, 2026. "Transformer-based CoVaR: Systemic Risk in Textual Information," Papers 2602.12490, arXiv.org.
- Hui Chen & Antoine Didisheim & Mohammad Pourmohammadi & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org, revised Feb 2026.
- Artur Kulpa & Grzegorz Wojarnik, 2025. "Prompt Engineering in Finance: An LLM-Based Multi-Agent Architecture for Decision Support," European Research Studies Journal, European Research Studies Journal, vol. 0(3), pages 1201-1217.
- Alex Kim & Maximilian Muhn & Valeri Nikolaev, 2024. "Financial Statement Analysis with Large Language Models," Papers 2407.17866, arXiv.org, revised Feb 2025.
- Wentao Zhang & Mingxuan Zhao & Jincheng Gao & Jieshun You & Huaiyu Jia & Yilei Zhao & Bo An & Shuo Sun, 2026. "AlphaForgeBench: Benchmarking End-to-End Trading Strategy Design with Large Language Models," Papers 2602.18481, arXiv.org.
- Mostapha Benhenda, 2026. "Look-Ahead-Bench: a Standardized Benchmark of Look-ahead Bias in Point-in-Time LLMs for Finance," Papers 2601.13770, arXiv.org.
- Julian Junyan Wang & Victor Xiaoqi Wang, 2025. "Assessing Consistency and Reproducibility in the Outputs of Large Language Models: Evidence Across Diverse Finance and Accounting Tasks," Papers 2503.16974, arXiv.org, revised Sep 2025.
- Yuan Li & Bingqiao Luo & Qian Wang & Nuo Chen & Xu Liu & Bingsheng He, 2024. "A Reflective LLM-based Agent to Guide Zero-shot Cryptocurrency Trading," Papers 2407.09546, arXiv.org.
- Songrun He & Linying Lv & Asaf Manela & Jimmy Wu, 2025. "Instruction Tuning Chronologically Consistent Language Models," Papers 2510.11677, arXiv.org, revised Nov 2025.
- Antoine Didisheim & Martina Fraschini & Luciano Somoza, 2025. "AI’s predictable memory in financial analysis," Economics Letters, Elsevier, vol. 256(C).
- Shuaiyu Chen & T. Clifton Green & Huseyin Gulen & Dexin Zhou, 2024. "What Does ChatGPT Make of Historical Stock Returns? Extrapolation and Miscalibration in LLM Stock Return Forecasts," Papers 2409.11540, arXiv.org.
- Rui Chen & Haiqi Jiang & Tingyu Guo & Chenyou Fan, 2025. "Can Large Language Models forecast carbon price movements? Evidence from Chinese carbon markets," Research in International Business and Finance, Elsevier, vol. 77(PB).
- Christian Breitung & Sebastian Müller, 2025. "Global Business Networks," Journal of Financial Economics, Elsevier, vol. 166(C).
- Jun Han & Shuo Zhang & Wei Li & Zhi Yang & Yifan Dong & Tu Hu & Jialuo Yuan & Xiaomin Yu & Yumo Zhu & Fangqi Lou & Xin Guo & Zhaowei Liu & Tianyi Jiang & Ruichuan An & Jingping Liu & Biao Wu & Rongze , 2026. "QuantaAlpha: An Evolutionary Framework for LLM-Driven Alpha Mining," Papers 2602.07085, arXiv.org.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-CMP-2025-10-20 (Computational Economics)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2510.10526.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators. General contact details of provider: http://arxiv.org/ .
Printed from https://ideas.repec.org/p/arx/papers/2510.10526.html