Descentwise inexact proximal algorithms for smooth optimization
The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, and (iii) an update formula for the proximal parameter. In this work we focus on (ii), when smoothness is present—so that Newton-like methods can be used for (i): we aim to give adequate stopping rules to reach overall efficiency of the method. Roughly speaking, usual rules consist of stopping inner iterations when the current iterate is close to the proximal point. By contrast, we use the standard paradigm of numerical optimization: the basis for our stopping test is a “sufficient” decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the algorithm thus obtained and illustrate it on some ill-conditioned problems. The experiments show that combining the proposed inexact proximal scheme with a standard smooth optimization algorithm improves the numerical behaviour of the latter on those ill-conditioned problems. Copyright Springer Science+Business Media, LLC 2012
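The structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the inner solver here is plain gradient descent on the proximal subproblem (rather than a Newton-like method), and the "ideal decrease" is replaced by a computable surrogate lower bound, ||∇f(x)||²/(2(L+c)); the names, parameters, and test problem are all assumptions for the sketch.

```python
import numpy as np

def inexact_proximal(f, grad, x0, L, c=1.0, m=0.5, tol=1e-6,
                     max_outer=5000, max_inner=50):
    """Inexact proximal iteration with a descent-based inner stopping
    test: inner iterations on the proximal subproblem
        phi(y) = f(y) + (c/2)||y - x||^2
    stop as soon as f has decreased by a fraction m of a surrogate for
    the ideal decrease (here the standard gradient-step lower bound,
    used for illustration only)."""
    x = np.asarray(x0, dtype=float)
    step = 1.0 / (L + c)                     # phi is (L + c)-smooth
    for _ in range(max_outer):
        g0 = grad(x)
        if np.linalg.norm(g0) < tol:
            break
        pred = 0.5 * step * np.dot(g0, g0)   # surrogate for the ideal decrease
        y = x.copy()
        for _ in range(max_inner):
            # gradient step on the regularized subproblem phi
            y = y - step * (grad(y) + c * (y - x))
            if f(x) - f(y) >= m * pred:      # "sufficient" decrease test
                break
        x = y                                # accept the inexact proximal point
    return x

# Ill-conditioned quadratic test problem: f(x) = 0.5 x^T A x
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = inexact_proximal(f, grad, np.array([1.0, 1.0]), L=100.0)
```

With this conservative surrogate the inner test is typically satisfied after a single gradient step; the paper's actual rule, based on a fraction of the true ideal decrease, allows deeper (e.g. Newton-like) inner loops.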
Volume 53, Issue 3 (December 2012)
Contact details of provider: http://www.springer.com
Order information: http://www.springer.com/math/journal/10589
When requesting a correction, please mention this item's handle: RePEc:spr:coopap:v:53:y:2012:i:3:p:755-769.