
Finance-Grounded Optimization For Algorithmic Trading

Author

Listed:
  • Kasymkhan Khubiev
  • Mikhail Semenov
  • Irina Podlipnova

Abstract

Deep learning is evolving rapidly and is being integrated into many domains. Finance is a challenging field for deep learning, especially when interpretable artificial intelligence (AI) is required. Although standard approaches perform very well in natural language processing, computer vision, and forecasting, they are not well suited to the financial world, where practitioners evaluate models with different metrics. We first introduce financially grounded loss functions derived from key quantitative finance metrics, including the Sharpe ratio, Profit-and-Loss (PnL), and Maximum Drawdown. Additionally, we propose turnover regularization, a method that inherently constrains the turnover of generated positions within predefined limits. Our findings demonstrate that the proposed loss functions, in conjunction with turnover regularization, outperform the traditional mean squared error loss for return-prediction tasks when evaluated using algorithmic trading metrics. The study shows that financially grounded metrics enhance predictive performance in trading strategies and portfolio optimization.
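
The listing does not reproduce the paper's exact loss formulations, so the following is only a minimal, hypothetical sketch of the general idea described in the abstract: differentiable losses built from trading metrics (negative Sharpe ratio and maximum drawdown of cumulative PnL) combined with a turnover penalty that activates once average position turnover exceeds a target level. All function names, the penalty weight lambda_turnover, and the turnover cap are illustrative assumptions, not the authors' implementation; PyTorch is used here purely for automatic differentiation.

    # Hypothetical sketch only -- not the paper's actual loss definitions.
    import torch

    def sharpe_loss(positions: torch.Tensor, returns: torch.Tensor,
                    eps: float = 1e-8) -> torch.Tensor:
        """Negative Sharpe ratio of per-period PnL, so minimizing it
        maximizes risk-adjusted return (annualization omitted)."""
        pnl = positions * returns                    # per-period strategy PnL
        return -pnl.mean() / (pnl.std(unbiased=False) + eps)

    def max_drawdown_loss(positions: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
        """Largest peak-to-trough decline of the cumulative PnL curve."""
        cum_pnl = (positions * returns).cumsum(dim=0)
        running_max = torch.cummax(cum_pnl, dim=0).values
        return (running_max - cum_pnl).max()

    def turnover_penalty(positions: torch.Tensor, max_turnover: float) -> torch.Tensor:
        """Zero while average |position change| stays below max_turnover,
        grows linearly once the limit is exceeded."""
        turnover = (positions[1:] - positions[:-1]).abs().mean()
        return torch.relu(turnover - max_turnover)

    # Toy usage: one asset, 250 periods of synthetic returns.
    torch.manual_seed(0)
    returns = 0.001 * torch.randn(250)
    raw_scores = torch.randn(250, requires_grad=True)
    positions = torch.tanh(raw_scores)               # positions bounded in [-1, 1]

    lambda_turnover = 10.0                           # penalty weight (assumed)
    loss = (sharpe_loss(positions, returns)
            + lambda_turnover * turnover_penalty(positions, max_turnover=0.05))
    loss.backward()                                  # gradients reach raw_scores

The drawdown-based term could be added to the objective in the same additive way; the weighting of the terms and the exact form of the turnover constraint are design choices made in the paper itself and are not specified here.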

Suggested Citation

  • Kasymkhan Khubiev & Mikhail Semenov & Irina Podlipnova, 2025. "Finance-Grounded Optimization For Algorithmic Trading," Papers 2509.04541, arXiv.org.
  • Handle: RePEc:arx:papers:2509.04541

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2509.04541
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    2. Yijia Xiao & Edward Sun & Tong Chen & Fang Wu & Di Luo & Wei Wang, 2025. "Trading-R1: Financial Trading with LLM Reasoning via Reinforcement Learning," Papers 2509.11420, arXiv.org.
    3. Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
    4. Alonso-Robisco, Andres & Carbó, José Manuel, 2023. "Analysis of CBDC narrative by central banks using large language models," Finance Research Letters, Elsevier, vol. 58(PC).
    5. Giorgos Iacovides & Wuyang Zhou & Danilo Mandic, 2025. "FinDPO: Financial Sentiment Analysis for Algorithmic Trading through Preference Optimization of LLMs," Papers 2507.18417, arXiv.org.
    6. Chenghao Liu & Aniket Mahanti & Ranesh Naha & Guanghao Wang & Erwann Sbai, 2025. "Enhancing Cryptocurrency Sentiment Analysis with Multimodal Features," Papers 2508.15825, arXiv.org, revised Aug 2025.
    7. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    8. Dong, Mengming Michael & Stratopoulos, Theophanis C. & Wang, Victor Xiaoqi, 2024. "A scoping review of ChatGPT research in accounting and finance," International Journal of Accounting Information Systems, Elsevier, vol. 55(C).
9. Vasant Dhar & João Sedoc, 2025. "DBOT: Artificial Intelligence for Systematic Long-Term Investing," Papers 2504.05639, arXiv.org.
    10. Masanori Hirano & Kentaro Imajo, 2024. "The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging," Papers 2409.19854, arXiv.org.
    11. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    12. Masanori Hirano & Kentaro Imajo, 2024. "Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training," Papers 2404.10555, arXiv.org.
    13. Zijie Zhao & Roy E. Welsch, 2024. "Hierarchical Reinforced Trader (HRT): A Bi-Level Approach for Optimizing Stock Selection and Execution," Papers 2410.14927, arXiv.org.
    14. Weilong Fu, 2025. "The New Quant: A Survey of Large Language Models in Financial Prediction and Trading," Papers 2510.05533, arXiv.org.
    15. Zihan Dong & Xinyu Fan & Zhiyuan Peng, 2024. "FNSPID: A Comprehensive Financial News Dataset in Time Series," Papers 2402.06698, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2509.04541. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.