Efficiency of coordinate descent methods on huge-scale optimization problems
In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and its accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
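The core idea the abstract describes — updating one randomly chosen coordinate per iteration instead of the full vector, so each step is cheap even when the dimension is huge — can be sketched as follows. This is a minimal illustration on a synthetic convex quadratic; the problem data, dimensions, and coordinate-wise step rule here are assumptions for demonstration, not the paper's exact methods or step-size analysis:

```python
import numpy as np

# Minimal sketch of random coordinate descent (illustrative assumptions,
# not the paper's exact algorithm) on a convex quadratic
#     f(x) = 1/2 x^T A x - b^T x,   with gradient  A x - b.
rng = np.random.default_rng(0)

n = 20
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)        # symmetric positive definite (well conditioned)
b = rng.standard_normal(n)
x = np.zeros(n)

# Coordinate-wise Lipschitz constants of the partial derivatives: L_i = A_ii.
L = np.diag(A).copy()

for _ in range(5000):
    i = rng.integers(n)            # pick one coordinate uniformly at random
    g_i = A[i] @ x - b[i]          # i-th partial derivative only (one row of A)
    x[i] -= g_i / L[i]             # cheap single-coordinate update

x_star = np.linalg.solve(A, b)     # exact minimizer, for comparison
print(np.linalg.norm(x - x_star))
```

Each iteration touches only one row of `A`, an O(n) cost, rather than the O(n^2) cost of a full gradient step — this is the "random partial update" that makes the approach attractive at huge scale.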
Date of creation: 01 Jan 2010
Provider: Voie du Roman Pays 34, 1348 Louvain-la-Neuve (Belgium); Fax: +32 10474304; Web page: http://www.uclouvain.be/core
Handle: RePEc:cor:louvco:2010002