Authors
Listed:
- Jeonggyu Huh
- Jaegi Jeon
- Hyeng Keun Koo
Abstract
Solving large-scale, continuous-time portfolio optimization problems involving numerous assets and state-dependent dynamics has long been challenged by the curse of dimensionality. Traditional dynamic programming and PDE-based methods, while rigorous, typically become computationally intractable beyond a few state variables ($\sim$3-6 limit in prior studies). To overcome this critical barrier, we introduce the \emph{Pontryagin-Guided Direct Policy Optimization} (PG-DPO) framework. PG-DPO leverages Pontryagin's Maximum Principle (PMP) and backpropagation-through-time (BPTT) to directly inform neural network-based policy learning. A key contribution is our highly efficient \emph{Projected PG-DPO (P-PGDPO)} variant. This approach uniquely utilizes BPTT to obtain rapidly stabilizing estimates of the Pontryagin costates and their crucial derivatives with respect to the state variables. These estimates are then analytically projected onto the manifold of optimal controls dictated by PMP's first-order conditions, significantly reducing training overhead and enhancing accuracy. This enables a breakthrough in scalability: numerical experiments demonstrate that P-PGDPO successfully tackles problems with dimensions previously considered far out of reach (up to 50 assets and 10 state variables). Critically, the framework accurately captures complex intertemporal hedging demands, a feat often elusive for other methods in high-dimensional settings. P-PGDPO delivers near-optimal policies, offering a practical and powerful alternative for a broad class of high-dimensional continuous-time control problems.
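The abstract describes the mechanism only at a high level, so the following toy sketch may help fix ideas. It is an editorial illustration, not the authors' implementation: it assumes a single risky asset with constant coefficients (the classic Merton problem), CRRA terminal utility, and PyTorch, and all names (`policy`, `simulate`, the hyperparameters) are hypothetical. Stage 1 mimics the baseline PG-DPO step, training a policy network by backpropagating through a simulated wealth rollout (BPTT). Stage 2 mimics the P-PGDPO projection: the costate $J_W$ and its derivative $J_{WW}$ are estimated by differentiating the simulated objective with respect to wealth and substituted into the PMP first-order condition, which in this degenerate case reduces to the myopic Merton ratio. In the paper's general multi-asset, state-dependent setting the projection also uses costate derivatives with respect to the additional state variables, which is how the intertemporal hedging demands mentioned above enter.

```python
import torch

torch.manual_seed(0)

# Market and preference parameters (illustrative constants, not from the paper)
mu, r, sigma, gamma = 0.08, 0.02, 0.20, 3.0   # drift, risk-free rate, volatility, CRRA
T, n_steps, n_paths = 1.0, 50, 4096
dt = T / n_steps

def crra_utility(w):
    return w.pow(1.0 - gamma) / (1.0 - gamma)

# Policy network: maps (t, W) to the risky-asset weight pi(t, W)
policy = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def simulate(w0, track_costate=False):
    """Euler-Maruyama wealth rollout under the current policy.

    With track_costate=True, initial wealth is a differentiable leaf, so
    derivatives of the simulated objective with respect to it give pathwise
    Monte Carlo estimates of the costate J_W and its derivative J_WW.
    """
    w0_leaf = torch.full((n_paths, 1), w0, requires_grad=track_costate)
    wt = w0_leaf
    for k in range(n_steps):
        t = torch.full_like(wt, k * dt)
        pi = policy(torch.cat([t, wt], dim=1))
        dz = torch.randn_like(wt) * dt ** 0.5
        wt = wt * (1.0 + (r + pi * (mu - r)) * dt + pi * sigma * dz)
        wt = wt.clamp(min=1e-6)              # keep wealth positive for CRRA utility
    return w0_leaf, wt

# Stage 1 (PG-DPO idea): direct policy gradients by backprop through the rollout
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    _, wT = simulate(1.0)
    loss = -crra_utility(wT).mean()          # maximize expected terminal utility
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2 (P-PGDPO idea): estimate costates by differentiating the simulated
# objective with respect to wealth, then project onto the PMP first-order
# condition. With constant coefficients this reduces to the myopic demand
#   pi* = -(J_W / (W * J_WW)) * (mu - r) / sigma**2,
# whose exact value is the Merton ratio (mu - r) / (gamma * sigma**2).
w0_leaf, wT = simulate(1.0, track_costate=True)
J = crra_utility(wT).mean()
(J_W,) = torch.autograd.grad(J, w0_leaf, create_graph=True)
(J_WW,) = torch.autograd.grad(J_W.sum(), w0_leaf)
# Only the ratio J_W / J_WW enters the projection, so the common 1/n_paths
# factor in the per-path averages cancels.
pi_projected = -(J_W.mean() / (1.0 * J_WW.mean())) * (mu - r) / sigma ** 2
print(f"projected weight: {pi_projected.item():.3f}, "
      f"Merton benchmark: {(mu - r) / (gamma * sigma ** 2):.3f}")
```

In this toy setting the projected weight should sit close to the Merton benchmark even before the policy network has fully converged, which is the intuition behind the reported reduction in training overhead; the accuracy claims for the high-dimensional case are, of course, those of the paper itself.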
Suggested Citation
Jeonggyu Huh & Jaegi Jeon & Hyeng Keun Koo, 2025. "Breaking the Dimensional Barrier: A Pontryagin-Guided Direct Policy Optimization for Continuous-Time Multi-Asset Portfolio," Papers 2504.11116, arXiv.org, revised May 2025.
Handle:
RePEc:arx:papers:2504.11116
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2504.11116. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Doing so allows you to link your profile to this item and to accept potential citations to it that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.