Printed from https://ideas.repec.org/p/arx/papers/2601.04896.html

Deep Reinforcement Learning for Optimum Order Execution: Mitigating Risk and Maximizing Returns

Authors

Listed:
  • Khabbab Zakaria
  • Jayapaulraj Jerinsh
  • Andreas Maier
  • Patrick Krauss
  • Stefano Pasquali
  • Dhagash Mehta

Abstract

Optimal Order Execution is a well-established problem in finance that concerns executing a trade (buy or sell) of a given volume within a specified time frame at the best achievable price. The problem requires optimizing returns while minimizing risk, yet recent research has predominantly addressed only one side of this trade-off. In this paper, we introduce an approach to Optimal Order Execution in the US market that leverages Deep Reinforcement Learning (DRL) to address the optimization problem holistically. We assess the performance of our model against two widely used execution strategies: Volume Weighted Average Price (VWAP) and Time Weighted Average Price (TWAP). Our experiments demonstrate that the DRL-based approach outperforms both VWAP and TWAP in terms of return on investment and risk management. The model's ability to adapt dynamically to market conditions, even during periods of market stress, underscores its promise as a robust solution.
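As background, the two benchmark strategies named in the abstract can be sketched as order-slicing rules. This is a minimal illustrative sketch, not the paper's implementation: TWAP splits the parent order evenly across time buckets, while VWAP allocates child orders proportionally to each bucket's expected market volume (the function names and the example volume profile are assumptions for illustration).

```python
import numpy as np

def twap_schedule(total_qty: float, n_slices: int) -> np.ndarray:
    # TWAP: split the parent order evenly across time buckets
    return np.full(n_slices, total_qty / n_slices)

def vwap_schedule(total_qty: float, expected_volumes) -> np.ndarray:
    # VWAP: allocate child orders in proportion to each bucket's
    # expected market volume (a hypothetical volume profile here)
    v = np.asarray(expected_volumes, dtype=float)
    return total_qty * v / v.sum()

# Example: sell 10,000 shares over 4 intervals
print(twap_schedule(10_000, 4))                      # [2500. 2500. 2500. 2500.]
print(vwap_schedule(10_000, [1200, 800, 500, 1500])) # [3000. 2000. 1250. 3750.]
```

A DRL execution policy, by contrast, would choose each slice adaptively from observed market state rather than from a fixed schedule.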

Suggested Citation

  • Khabbab Zakaria & Jayapaulraj Jerinsh & Andreas Maier & Patrick Krauss & Stefano Pasquali & Dhagash Mehta, 2026. "Deep Reinforcement Learning for Optimum Order Execution: Mitigating Risk and Maximizing Returns," Papers 2601.04896, arXiv.org, revised Jan 2026.
  • Handle: RePEc:arx:papers:2601.04896

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2601.04896
    File Function: Latest version
    Download Restriction: no
    ---><---



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.