Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Applied Mathematics | en_US |
dc.contributor.advisor | Yang, Xiaoqi (AMA) | en_US |
dc.creator | Wu, Yuqia | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13267 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Globally convergent regularized Newton methods for nonconvex sparse optimization problems | en_US |
dcterms.abstract | Many algorithms have been proposed to solve the nonconvex and nonsmooth problems that arise in sparse optimization. Most are first-order methods, such as the proximal gradient method. First-order methods have several advantages, including a low per-iteration computational cost, weak conditions for global convergence, and easy implementation; however, their convergence rate is at most linear, which makes them slow on large-scale problems. The classical Newton method, a second-order method, can achieve a locally superlinear convergence rate, but even when equipped with an Armijo line search for smooth optimization it guarantees only subsequence convergence, let alone for nonsmooth sparse optimization. By exploiting the structure of two classes of nonconvex and nonsmooth sparse optimization problems arising in compressed sensing and machine learning, this thesis presents an efficient hybrid framework that combines a proximal gradient method with a Newton-type method, taking advantage of both kinds of algorithms while avoiding their disadvantages. | en_US |
dcterms.abstract | The first part of the thesis designs a hybrid of the proximal gradient method and a regularized subspace Newton method (HpgSRN) for solving ℓq-norm (0<q<1) regularized minimization problems with a twice continuously differentiable loss function. HpgSRN first uses proximal gradient steps to locate a neighbourhood of a potential stationary point, and then applies a regularized Newton method in the subspace on which the objective is locally smooth, so as to speed up convergence; a minimal illustrative sketch of such a hybrid scheme is given after this record. We show that this hybrid algorithm eventually reduces to a regularized Newton method for minimizing a locally smooth function. If the reduced objective satisfies the Kurdyka-Łojasiewicz property and a curve-ratio condition holds, the generated sequence converges to an L-stationary point from an arbitrary initial point. Moreover, if we additionally assume that the sequence converges to a second-order stationary point at which an error bound condition holds, we prove superlinear convergence of the sequence, without assuming either the isolatedness or the local minimality of the limit point. Numerical comparisons with the proximal gradient method and ZeroFPR, the latter an algorithm that applies a limited-memory BFGS method to the forward-backward envelope of the objective, indicate that the proposed HpgSRN not only converges much faster but also yields comparable and even better solutions. | en_US |
dcterms.abstract | The second part of the thesis studies fused zero-norm regularization problems, the zero-norm version of the fused Lasso with an additional box constraint. We propose a polynomial-time algorithm for computing an element of the proximal mapping of the fused zero-norms over a box constraint. Building on this, we propose a hybrid of the proximal gradient method and an inexact projected regularized Newton method for solving the fused zero-norm regularization problems; a simplified sketch of a projected regularized Newton step over a box is given after this record. We prove that the algorithm eventually reduces to an inexact projected regularized Newton method for seeking a critical point of a smooth function over a convex constraint. Convergence of the whole sequence is established under a nondegeneracy condition, a curve-ratio condition, and the assumption that the reduced objective is a Kurdyka-Łojasiewicz function. A superlinear convergence rate of the iterates is established under a locally Hölderian error bound condition on a second-order stationary point set, without requiring either the isolatedness or the local optimality of the limit point. Finally, numerical experiments illustrate the features of the considered model and the superiority of the proposed algorithm. | en_US |
dcterms.extent | xvi, 136 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2024 | en_US |
dcterms.educationalLevel | Ph.D. | en_US |
dcterms.educationalLevel | All Doctorate | en_US |
dcterms.LCSH | Mathematical optimization | en_US |
dcterms.LCSH | Newton-Raphson method | en_US |
dcterms.LCSH | Algorithms | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | open access | en_US |
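The record itself contains no code; the following is a minimal NumPy sketch, not the thesis algorithm, of the kind of hybrid scheme described in the first abstract: proximal gradient steps on an ℓq-regularized least-squares problem to settle the support, followed by regularized Newton steps restricted to that support, where the objective is smooth. All function names, parameter choices, switching rules, and safeguards below are illustrative assumptions; the scalar ℓq proximal operator is evaluated numerically for simplicity (closed forms exist for q = 1/2 and q = 2/3).

```python
import numpy as np

def prox_lq_scalar(z, lam, q=0.5, iters=60):
    """Proximal operator of t -> lam*|t|**q (0 < q < 1), evaluated numerically.

    Compares t = 0 with the larger positive root of the 1-D stationarity
    condition, found by bisection.  Illustrative only.
    """
    az = abs(z)
    if az == 0.0:
        return 0.0
    h = lambda t: t - az + lam * q * t ** (q - 1.0)     # derivative of the 1-D objective on t > 0
    t_lo = (lam * q * (1.0 - q)) ** (1.0 / (2.0 - q))   # minimizer of h on (0, inf)
    if t_lo >= az or h(t_lo) >= 0.0:
        return 0.0                                      # no interior local minimizer beats 0
    t_hi = az                                           # h(az) > 0, so the root lies in (t_lo, az)
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        t_lo, t_hi = (t_mid, t_hi) if h(t_mid) < 0.0 else (t_lo, t_mid)
    t = 0.5 * (t_lo + t_hi)
    keep = 0.5 * (t - az) ** 2 + lam * t ** q < 0.5 * az ** 2
    return np.sign(z) * t if keep else 0.0

def hybrid_pg_newton(A, b, lam, q=0.5, pg_iters=200, newton_iters=50, tol=1e-8):
    """Sketch of a hybrid scheme: proximal gradient phase, then subspace Newton phase."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the loss gradient

    # Phase 1: proximal gradient on 0.5*||Ax - b||^2 + lam * sum_i |x_i|^q.
    for _ in range(pg_iters):
        z = x - A.T @ (A @ x - b) / L
        x_new = np.array([prox_lq_scalar(zi, lam / L, q) for zi in z])
        done = np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x))
        x = x_new
        if done:
            break

    # Phase 2: regularized Newton on the support S (|x_i|^q is smooth for x_i != 0).
    S = np.flatnonzero(x)
    for _ in range(newton_iters if S.size else 0):
        AS, xS = A[:, S], x[S]
        g = AS.T @ (AS @ xS - b) + lam * q * np.sign(xS) * np.abs(xS) ** (q - 1.0)
        if np.linalg.norm(g) <= tol:
            break
        H = AS.T @ AS + lam * q * (q - 1.0) * np.diag(np.abs(xS) ** (q - 2.0))
        mu = 1e-4 * np.linalg.norm(g)                   # Tikhonov term tied to the gradient norm
        while True:                                     # enlarge mu until H + mu*I is positive definite
            try:
                np.linalg.cholesky(H + mu * np.eye(S.size))
                break
            except np.linalg.LinAlgError:
                mu = 10.0 * mu + 1e-8
        d = np.linalg.solve(H + mu * np.eye(S.size), -g)
        f = lambda v: 0.5 * np.linalg.norm(AS @ v - b) ** 2 + lam * np.sum(np.abs(v) ** q)
        t, f0, slope = 1.0, f(xS), g @ d                # Armijo backtracking on the reduced objective
        while f(xS + t * d) > f0 + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x[S] = xS + t * d
    return x
```

The gradient-norm-proportional regularization and plain Armijo backtracking are generic regularized-Newton choices; the thesis's actual switching rules, stationarity tests, and convergence safeguards are more delicate than this sketch.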
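The polynomial-time proximal mapping of the fused zero-norm over a box described in the second abstract is too involved for a short snippet, so the sketch below only illustrates the other building block named there: one projected regularized Newton step for a smooth function over a box constraint, with the projection performed by clipping. The function name, the positive-definiteness safeguard, and the Armijo-style arc search are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def projected_regularized_newton_step(x, f, grad_f, hess_f, lo, hi,
                                      mu0=1e-4, sigma=1e-4):
    """One projected regularized Newton step for min f(x) s.t. lo <= x <= hi.

    Sketch only: regularize the Hessian until it is positive definite, solve
    for the Newton direction (an inexact solver such as CG would also fit),
    and backtrack along the projection arc onto the box.
    """
    g, H = grad_f(x), hess_f(x)
    I = np.eye(x.size)
    mu = mu0 * np.linalg.norm(g)                        # regularization tied to the gradient norm
    while True:
        try:
            np.linalg.cholesky(H + mu * I)              # positive-definiteness test
            break
        except np.linalg.LinAlgError:
            mu = 10.0 * mu + 1e-8
    d = np.linalg.solve(H + mu * I, -g)
    t, fx = 1.0, f(x)
    while t > 1e-12:
        x_trial = np.clip(x + t * d, lo, hi)            # projection onto the box is a clip
        if f(x_trial) <= fx + sigma * g @ (x_trial - x):  # sufficient-decrease test along the arc
            return x_trial
        t *= 0.5
    return x                                            # no acceptable step found; return x unchanged
```

In the thesis the inner linear system is solved inexactly and the Newton phase interacts with the support identified through the proximal mapping of the fused zero-norm; this sketch conveys only the overall shape of such a step.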