Author
Listed:
- Dong-Young Lim
(Department of Industrial Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea)
- Ariel Neufeld
(Division of Mathematical Sciences, Nanyang Technological University, 637371 Singapore)
- Sotirios Sabanis
(School of Mathematics, The University of Edinburgh, Edinburgh EH9 3FD, United Kingdom; and The Alan Turing Institute, London NW1 2DB, United Kingdom; and National Technical University of Athens, 10682 Athens, Greece)
- Ying Zhang
(Financial Technology Thrust, Society Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China)
Abstract
We introduce a new Langevin dynamics based algorithm, called the extended tamed hybrid ε-order polygonal unadjusted Langevin algorithm (e-THεO POULA), to solve optimization problems with discontinuous stochastic gradients, which naturally appear in real-world applications such as quantile estimation, vector quantization, conditional value at risk (CVaR) minimization, and regularized optimization problems involving rectified linear unit (ReLU) neural networks. We demonstrate both theoretically and numerically the applicability of the e-THεO POULA algorithm. More precisely, under the conditions that the stochastic gradient is locally Lipschitz in average and satisfies a certain convexity at infinity condition, we establish nonasymptotic error bounds for e-THεO POULA in Wasserstein distances and provide a nonasymptotic estimate for the expected excess risk, which can be controlled to be arbitrarily small. Three key applications in finance and insurance are provided, namely, multiperiod portfolio optimization, transfer learning in multiperiod portfolio optimization, and insurance claim prediction, which involve neural networks with (Leaky-)ReLU activation functions. Numerical experiments conducted using real-world data sets illustrate the superior empirical performance of e-THεO POULA compared with SGLD (stochastic gradient Langevin dynamics), TUSLA (tamed unadjusted stochastic Langevin algorithm), adaptive moment estimation (Adam), and adaptive moment estimation with a strongly non-convex decaying learning rate, in terms of model accuracy.
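To make the abstract's description concrete, below is a minimal, hedged Python sketch of a tamed-Langevin update in the spirit of e-THεO POULA, applied to quantile estimation, one of the listed applications whose stochastic gradient is discontinuous. The function name theopoula_style_step, the parameters lam (step size), beta (inverse temperature), and eps (taming parameter), and the exact component-wise taming/boosting form are illustrative assumptions inferred from the general description, not the authors' precise scheme; in particular, the "extended" part handling regularized objectives is omitted.

import numpy as np

def theopoula_style_step(theta, grad_fn, batch, lam=0.05, beta=1e10, eps=1e-4, rng=None):
    # One tamed-Langevin step (illustrative sketch; the taming/boost form is an assumption).
    rng = np.random.default_rng() if rng is None else rng
    g = grad_fn(theta, batch)                       # stochastic gradient; may be discontinuous in theta
    tamed = g / (1.0 + np.sqrt(lam) * np.abs(g))    # component-wise taming bounds the drift when |g_i| is large
    boost = 1.0 + np.sqrt(lam) / (eps + np.abs(g))  # eps-dependent factor keeps the drift from vanishing when |g_i| is small
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * tamed * boost + noise

# Usage: estimate the 0.9-quantile of an exponential distribution by minimizing the
# check (pinball) loss, whose stochastic gradient 1{x < theta} - 0.9 is discontinuous in theta.
rng = np.random.default_rng(0)
data = rng.exponential(size=10_000)
grad = lambda th, xb: np.array([np.mean((xb < th[0]).astype(float) - 0.9)])
theta = np.zeros(1)
for _ in range(2_000):
    theta = theopoula_style_step(theta, grad, rng.choice(data, size=128), rng=rng)
# theta[0] should end up near the true 0.9-quantile, ln(10) ≈ 2.30.

The large beta makes the Gaussian noise negligible, so the sketch behaves essentially as a tamed stochastic gradient method; smaller beta values would recover the sampling-flavored Langevin behavior the abstract refers to.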
Suggested Citation
Dong-Young Lim & Ariel Neufeld & Sotirios Sabanis & Ying Zhang, 2025.
"Langevin Dynamics Based Algorithm e-TH ε O POULA for Stochastic Optimization Problems with Discontinuous Stochastic Gradient,"
Mathematics of Operations Research, INFORMS, vol. 50(3), pages 2333-2374, August.
Handle:
RePEc:inm:ormoor:v:50:y:2025:i:3:p:2333-2374
DOI: 10.1287/moor.2022.0307
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormoor:v:50:y:2025:i:3:p:2333-2374. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.