Full metadata record
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.contributor.advisor | Yang, Xiaoqi (AMA) | en_US
dc.creator | Wu, Yuqia | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13267 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Globally convergent regularized Newton methods for nonconvex sparse optimization problems | en_US
dcterms.abstract | Many algorithms have been proposed to solve the nonconvex and nonsmooth problems that arise in sparse optimization. Most of them are first-order methods, including the proximal gradient method. First-order methods have several advantages: low computational cost per iteration, weak conditions for global convergence, and easy implementation. However, their convergence rate is at most linear, so they become slow on large-scale problems. On the other hand, the classical Newton method, a second-order method, can achieve a locally superlinear convergence rate; yet even when equipped with an Armijo line search for minimizing smooth problems, it guarantees only subsequence convergence, let alone for nonsmooth sparse optimization. By exploiting the structure of two classes of nonconvex and nonsmooth sparse optimization problems arising in compressed sensing and machine learning, this thesis presents an efficient hybrid framework that combines a proximal gradient method with a Newton-type method, taking advantage of both kinds of algorithms while avoiding their respective disadvantages (a schematic sketch of such a hybrid loop is given after this record). | en_US
dcterms.abstract | The first part of the thesis designs a hybrid of the proximal gradient method and a regularized subspace Newton method (HpgSRN) for solving ℓq (0<q<1)-norm regularized minimization problems with a twice continuously differentiable loss function. HpgSRN first runs proximal gradient steps to locate a neighbourhood of a potential stationary point, and then applies a regularized Newton method in the subspace on which the objective is locally smooth, to accelerate convergence. We show that this hybrid algorithm eventually reduces to a regularized Newton method for minimizing a locally smooth function. If the reduced objective satisfies the Kurdyka-Łojasiewicz property and a curve ratio condition holds, the generated sequence converges to an L-stationary point from an arbitrary initial point. Moreover, if we additionally assume that the generated sequence converges to a second-order stationary point at which an error bound condition holds, we prove superlinear convergence of the sequence, without assuming either the isolatedness or the local minimality of the limit point. Numerical comparisons with the proximal gradient method and with ZeroFPR, an algorithm that applies a limited-memory BFGS method to minimize the forward-backward envelope of the objective, indicate that HpgSRN not only converges much faster but also yields comparable and even better solutions (the second sketch after this record illustrates the scalar ℓq proximal subproblem). | en_US
dcterms.abstract | The second part of the thesis studies fused zero-norm regularization problems, the zero-norm version of the fused Lasso with an additional box constraint. We propose a polynomial-time algorithm to find an element of the proximal mapping of the fused zero-norms over a box constraint. Based on this, we propose a hybrid of the proximal gradient method and an inexact projected regularized Newton method for solving the fused zero-norm regularization problems. We prove that the algorithm eventually reduces to an inexact projected regularized Newton method for seeking a critical point of a smooth function over a convex constraint. Convergence of the whole sequence is established under a nondegeneracy condition, a curve ratio condition, and the assumption that the reduced objective is a Kurdyka-Łojasiewicz function. A superlinear convergence rate of the iterates is established under a locally Hölderian error bound condition on a second-order stationary point set, without requiring either the isolatedness or the local optimality of the limit point. Finally, numerical experiments illustrate the features of the considered model and the superiority of the proposed algorithm (the third sketch after this record shows an illustrative inexact projected Newton step). | en_US
dcterms.extent | xvi, 136 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2024 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.LCSH | Mathematical optimization | en_US
dcterms.LCSH | Newton-Raphson method | en_US
dcterms.LCSH | Algorithms | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | open access | en_US
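
Sketch 1. The hybrid framework described in the first abstract paragraph alternates between a cheap proximal gradient phase and a Newton-type phase once a smooth subspace has been identified. Below is a minimal Python sketch of such a loop, under assumptions of my own: the names (hybrid_pg_newton, newton_phase), the support-stabilization test used as the switching rule, and the plain Armijo backtracking are illustrative stand-ins, not the thesis's exact conditions.

    import numpy as np

    def hybrid_pg_newton(f, grad_f, hess_f, prox_g, x0, step=1e-2,
                         stable_iters=5, max_iter=500, tol=1e-8):
        # Phase 1: proximal gradient, x+ = prox_{step*g}(x - step*grad f(x)),
        # run until the support (set of nonzero entries) stops changing.
        x = x0.copy()
        history = []
        for _ in range(max_iter):
            x_new = prox_g(x - step * grad_f(x), step)
            history.append(tuple(np.flatnonzero(x_new)))
            if len(history) >= stable_iters and len(set(history[-stable_iters:])) == 1:
                # Support looks stable: switch to the Newton phase on it.
                return newton_phase(f, grad_f, hess_f, x_new, list(history[-1]), tol)
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                return x_new
            x = x_new
        return x

    def newton_phase(f, grad_f, hess_f, x, idx, tol, mu=1e-4, max_iter=100):
        # Phase 2: regularized Newton restricted to the coordinates in idx,
        # where the objective is locally smooth: solve (H_S + mu*I) d = -g_S.
        for _ in range(max_iter):
            g = grad_f(x)[idx]
            if np.linalg.norm(g) <= tol:
                break
            H = hess_f(x)[np.ix_(idx, idx)]
            d = np.linalg.solve(H + mu * np.eye(len(idx)), -g)
            t, fx = 1.0, f(x)
            while t >= 1e-12:
                x_trial = x.copy()
                x_trial[idx] += t * d
                if f(x_trial) <= fx + 1e-4 * t * g.dot(d):  # Armijo test
                    break
                t *= 0.5
            x = x_trial
        return x

The point of this design is that the expensive second-order phase only ever sees the small subspace of nonzero coordinates identified by the cheap first-order phase.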
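Sketch 2. The proximal gradient phase for the ℓq (0<q<1) model needs the proximal mapping of t ↦ λ|t|^q, which separates coordinate-wise. One standard way to evaluate the scalar prox is bisection on the stationarity equation; the sketch below follows that generic construction (prox_lq_scalar is a hypothetical name, and the thesis's actual routine may differ).

    import numpy as np

    def prox_lq_scalar(a, lam, s, q):
        # argmin_t  (t - a)**2 / (2*s) + lam * |t|**q,  for 0 < q < 1.
        if a == 0.0:
            return 0.0
        sign, a_abs = (1.0 if a > 0 else -1.0), abs(a)
        c = s * lam * q
        h = lambda t: t + c * t ** (q - 1.0) - a_abs  # stationarity: h(t) = 0
        # h decreases then increases, with minimizer t_min; if h(t_min) > 0
        # there is no interior stationary point and the prox is 0.
        t_min = (c * (1.0 - q)) ** (1.0 / (2.0 - q))
        if h(t_min) > 0.0:
            return 0.0
        lo, hi = t_min, a_abs  # the larger root (the local minimizer) lies here
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if h(mid) > 0.0:
                hi = mid
            else:
                lo = mid
        t = 0.5 * (lo + hi)
        # The prox is either this interior candidate or 0, whichever is cheaper.
        obj = lambda t: (t - a_abs) ** 2 / (2.0 * s) + lam * t ** q
        return sign * t if obj(t) < obj(0.0) else 0.0

In practice this scalar routine would be applied elementwise to a = x - step * grad_f(x) inside each proximal gradient step.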
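Sketch 3. For the second part, each Newton-type iteration solves a regularized Newton system only approximately and then projects back onto the box. The sketch below uses a plain conjugate gradient inner solver and Armijo backtracking along the projection arc; both the inexactness criterion and the regularization rule are simplified assumptions, not the thesis's.

    import numpy as np

    def cg_solve(matvec, b, tol=1e-6, max_iter=200):
        # Plain conjugate gradient; a loose tol or small max_iter is what
        # makes the Newton system solve "inexact".
        x = np.zeros_like(b)
        r, p = b.copy(), b.copy()
        rs, bnorm = r.dot(r), np.linalg.norm(b)
        if rs == 0.0:
            return x
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs / p.dot(Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r.dot(r)
            if np.sqrt(rs_new) <= tol * max(1.0, bnorm):
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    def projected_newton_step(f, grad_f, hess_vec, x, lower, upper, mu=1e-4):
        # One inexact projected regularized Newton step for
        # min f(x) subject to lower <= x <= upper.
        g = grad_f(x)
        d = cg_solve(lambda v: hess_vec(x, v) + mu * v, -g)
        fx, t = f(x), 1.0
        for _ in range(30):  # Armijo backtracking along the projection arc
            x_new = np.clip(x + t * d, lower, upper)
            if f(x_new) <= fx + 1e-4 * g.dot(x_new - x):
                return x_new
            t *= 0.5
        return x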

Files in This Item:
File | Description | Size | Format
7713.pdf | For All Users | 1.08 MB | Adobe PDF




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13267