IDEAS home Printed from https://ideas.repec.org/a/inm/ormoor/v47y2022i1p690-719.html

A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization

Author

Listed:
  • Afrooz Jalilzadeh

    (Industrial and Manufacturing Engineering, Pennsylvania State University, State College, Pennsylvania 16802)

  • Angelia Nedić

    (School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona 85287)

  • Uday V. Shanbhag

    (Industrial and Manufacturing Engineering, Pennsylvania State University, State College, Pennsylvania 16802)

  • Farzad Yousefian

    (School of Industrial Engineering and Management, Oklahoma State University, Stillwater, Oklahoma 74078)

Abstract

Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in smooth, unconstrained, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presence of constraints. Accordingly, this paper presents a quasi-Newton framework that can process merely convex and possibly nonsmooth (but smoothable) stochastic convex problems. We propose a framework that combines iterative smoothing and regularization with a variance-reduced scheme reliant on using an increasing sample size of gradients. We make the following contributions. (i) We develop a regularized and smoothed variable sample-size BFGS update (rsL-BFGS) that generates a sequence of Hessian approximations and can accommodate nonsmooth convex objectives by utilizing iterative regularization and smoothing. (ii) In strongly convex regimes with state-dependent noise, the proposed variable sample-size stochastic quasi-Newton (VS-SQN) scheme admits a nonasymptotic linear rate of convergence, whereas the oracle complexity of computing an ε-solution is O(κ^(m+1)/ε), where κ denotes the condition number and m ≥ 1. In nonsmooth (but smoothable) regimes, using Moreau smoothing retains the linear convergence rate for the resulting smoothed VS-SQN (or sVS-SQN) scheme. Notably, the nonsmooth regime allows for accommodating convex constraints. To contend with the possible unavailability of Lipschitzian and strong convexity parameters, we also provide sublinear rates for diminishing step-length variants that do not rely on the knowledge of such parameters. (iii) In merely convex but smooth settings, the regularized VS-SQN scheme rVS-SQN displays a rate of O(1/k^(1−ε)) with an oracle complexity of O(1/ε^3). When the smoothness requirements are weakened, the rate for the regularized and smoothed VS-SQN scheme rsVS-SQN worsens to O(k^(−1/3)). Such statements allow for a state-dependent noise assumption under a quadratic growth property on the objective. To the best of our knowledge, the rate results are among the first available rates for QN methods in nonsmooth regimes. Preliminary numerical evidence suggests that the schemes compare well with accelerated gradient counterparts on selected problems in stochastic optimization and machine learning, with significant benefits in ill-conditioned regimes.
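The central mechanism the abstract describes (a quasi-Newton update driven by stochastic gradients averaged over a geometrically increasing sample size) can be sketched as follows. This is an illustrative toy, not the authors' algorithm: the smoothing and regularization sequences of rsL-BFGS are omitted, and the objective (a synthetic strongly convex least-squares problem), the batch-growth factor `growth`, the step length, and the memory size are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic strongly convex stochastic objective (an assumption for this sketch):
# f(x) = E[(a^T x - b)^2 / 2], where b = a^T x_star + noise.
d = 10
x_star = rng.standard_normal(d)

def sample_gradient(x, batch_size):
    """Stochastic gradient averaged over a batch of fresh samples."""
    A = rng.standard_normal((batch_size, d))
    b = A @ x_star + 0.1 * rng.standard_normal(batch_size)
    return A.T @ (A @ x - b) / batch_size

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_k @ grad."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial Hessian scaling from the most recent pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        beta = (y @ q) / (y @ s)
        q += (a - beta) * s
    return q

def vs_sqn(x0, iters=50, mem=5, step=0.5, growth=1.3):
    """Variable sample-size SQN sketch: batch size grows geometrically."""
    x = x0.copy()
    s_list, y_list = [], []
    g = sample_gradient(x, 2)
    for k in range(iters):
        N_k = int(np.ceil(growth ** k))   # increasing sample size
        x_new = x - step * two_loop(g, s_list, y_list)
        g_new = sample_gradient(x_new, N_k)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                 # curvature safeguard
            s_list.append(s); y_list.append(y)
            if len(s_list) > mem:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

x_hat = vs_sqn(np.zeros(d))
print(np.linalg.norm(x_hat - x_star))    # distance to the true minimizer
```

The geometric batch growth is what drives the variance reduction: the noise in the averaged gradient shrinks at the same rate the iterates contract, which is the intuition behind the linear-rate claim for VS-SQN in the strongly convex case.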

Suggested Citation

  • Afrooz Jalilzadeh & Angelia Nedić & Uday V. Shanbhag & Farzad Yousefian, 2022. "A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization," Mathematics of Operations Research, INFORMS, vol. 47(1), pages 690-719, February.
  • Handle: RePEc:inm:ormoor:v:47:y:2022:i:1:p:690-719
    DOI: 10.1287/moor.2021.1147

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/moor.2021.1147
    Download Restriction: no

    File URL: https://libkey.io/10.1287/moor.2021.1147?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormoor:v:47:y:2022:i:1:p:690-719. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.