Authors:
- David Newton
(Department of Statistics, Purdue University, West Lafayette, Indiana 47906)
- Raghu Bollapragada
(Department of Mechanical Engineering, The University of Texas at Austin, Austin, Texas 78712)
- Raghu Pasupathy
(Department of Statistics, Purdue University, West Lafayette, Indiana 47906; and Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai 600036, India)
- Nung Kwan Yip
(Department of Mathematics, Purdue University, West Lafayette, Indiana 47906)
Abstract
Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L1 consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.
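The RA loop described in the abstract (grow the subsample, hand the resulting deterministic problem to an off-the-shelf solver such as L-BFGS, stop the inner solve once further progress is statistically unwarranted, and warm-start the next outer iteration) can be sketched in a few lines of Python. The snippet below is only a rough illustration: the synthetic objective, the geometric sample-size schedule, and the inner gradient-norm tolerance of order 1/sqrt(m_k) are assumptions made for this example, not the paper's exact termination criterion or theory.

```python
# Minimal sketch of a Retrospective Approximation (RA) loop for
#   min_x f(x) = E[F(x, xi)],
# plugging a deterministic solver (scipy's L-BFGS-B) into each subsampled
# problem. Objective, sample-size schedule, and stopping rule are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 10                                   # problem dimension

def subsample_loss_and_grad(x, sample):
    # Sample-average objective 0.5 * mean_i ||x - xi_i||^2 and its gradient.
    diffs = x - sample                   # shape (m, d)
    f = 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
    g = np.mean(diffs, axis=0)
    return f, g

def ra(x0, outer_iters=8, m0=16, growth=2.0, c=1.0):
    x, m = np.asarray(x0, dtype=float), m0
    for _ in range(outer_iters):
        sample = rng.normal(size=(int(m), d))      # fresh subsample of size m_k
        # Inner deterministic solve on the subsampled problem, warm-started at x
        # and stopped once the subsampled gradient norm is of the same order as
        # the sampling error, roughly c / sqrt(m_k).
        res = minimize(lambda z: subsample_loss_and_grad(z, sample),
                       x, jac=True, method="L-BFGS-B",
                       options={"gtol": c / np.sqrt(m), "maxiter": 50})
        x, m = res.x, m * growth                   # warm start; grow the sample size
    return x

x_final = ra(np.zeros(d))
print(x_final)
```

Because each outer iteration is warm-started at the previous solution, the inner deterministic solver typically needs only a few steps once the iterates are near a first-order critical point, which is what makes stopping the inner solve early statistically sensible.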
Suggested Citation
David Newton & Raghu Bollapragada & Raghu Pasupathy & Nung Kwan Yip, 2025.
"A Retrospective Approximation Approach for Smooth Stochastic Optimization,"
Mathematics of Operations Research, INFORMS, vol. 50(3), pages 2301-2332, August.
Handle:
RePEc:inm:ormoor:v:50:y:2025:i:3:p:2301-2332
DOI: 10.1287/moor.2022.0136