DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Wang, Hong | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/8818 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | -
dc.title | Second-order methods for nonconvex optimization : theory and complexity analysis | en_US
dcterms.abstract | We consider second-order methods for solving two classes of nonconvex minimization problems arising from diverse applications of optimization. By second-order methods, we refer to methods that use second-order information about the objective function. The first class is the so-called Affine Rank Minimization Problem (ARMP), whose aim is to minimize the rank of a matrix over a given affine set. The other is the Partially Separable Minimization Problem (PSMP), which minimizes an objective function with a partially separable structure over a given convex set. This thesis is accordingly divided into two parts. In the first part, we explore the ARMP through its matrix factorization reformulation. Under certain conditions, we show that the corresponding factorization models have the property that all second-order stationary points are global minimizers. Assuming this property holds, we propose an algorithmic framework that outputs a global solution of the ARMP after solving a series of its factorization models with different ranks to second-order necessary optimality. Finally, we put forward a conjecture that the reduction between the global minima of the low-rank approximations at consecutive ranks decreases monotonically as the rank increases. If this conjecture holds, we can accelerate the estimation of the optimal rank by an adaptive technique, and hence significantly improve the overall efficiency of our framework in solving the ARMP. In the second part of this thesis, we study the PSMP over a convex constraint set. We first propose an adaptive regularization algorithm for solving the PSMP, in which the expense of using high-order models is mitigated by exploiting the partially separable structure. We then show that the algorithm using an order-p model needs at most O(ε^{-(p+1)/p}) evaluations of the objective function and its derivatives to reach an ε-approximate first-order stationary point. The complexity in terms of ε is unaffected by the use of structure. An extension of the main idea is also presented for the case where the objective function may be non-Lipschitz continuous. We apply the algorithm with an adaptive cubic regularization term to data-fitting problems involving the ℓq quasi-norm for q ∈ (0, 1), and it turns out that even in the non-Lipschitz case the complexity bound O(ε^{-3/2}) is retained. | en_US
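The rank-increasing framework for the ARMP described in the abstract can be illustrated with a minimal sketch. Assumptions not taken from the thesis: the affine set is given by measurements ⟨A_i, X⟩ = b_i with explicit sensing matrices, plain gradient descent stands in for a genuine second-order solver, and the names `solve_factorized` and `armp_framework` are hypothetical.

```python
import numpy as np

def solve_factorized(A_mats, b, m, n, r, iters=8000, lr=0.02, seed=0):
    """Minimize f(U, V) = ||A(U V^T) - b||^2 over U (m x r), V (n x r),
    where A(X)_i = <A_i, X>.  Plain gradient descent is used here only as
    a stand-in for a second-order method."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((m, r))
    V = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        X = U @ V.T
        resid = np.array([np.sum(Ai * X) for Ai in A_mats]) - b
        Gx = 2.0 * sum(ri * Ai for ri, Ai in zip(resid, A_mats))  # df/dX
        U, V = U - lr * Gx @ V, V - lr * Gx.T @ U
    resid = np.array([np.sum(Ai * (U @ V.T)) for Ai in A_mats]) - b
    return U, V, float(resid @ resid)

def armp_framework(A_mats, b, m, n, max_rank, tol=1e-8):
    """Solve factorization models of increasing rank r = 1, 2, ... and stop
    at the first rank whose stationary point fits the affine constraints,
    mirroring the rank-by-rank framework sketched in the abstract."""
    for r in range(1, max_rank + 1):
        U, V, f_val = solve_factorized(A_mats, b, m, n, r)
        if f_val < tol:  # residual ~ 0: a global solution of this instance
            return U @ V.T, r
    return U @ V.T, max_rank
```

On an instance where all second-order stationary points of the factorization model are global minimizers, the first rank at which the residual vanishes is the minimum rank.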
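For the second part, the p = 2 instance of adaptive regularization is the classical adaptive cubic regularization iteration. Below is a minimal sketch on a smooth unconstrained problem, without the partial separability or the non-Lipschitz ℓq term treated in the thesis; all function names are hypothetical, and the cubic subproblem is solved through its first-order condition (H + λI)s = -g with λ = σ||s||.

```python
import numpy as np

def cubic_subproblem(g, H, sigma):
    """Minimize g's + s'Hs/2 + sigma*||s||^3/3 by bisection on the
    multiplier lam in the optimality condition (H + lam I) s = -g,
    lam = sigma*||s||, with H + lam I positive semidefinite."""
    n = len(g)
    lam_lo = max(0.0, -float(np.linalg.eigvalsh(H)[0])) + 1e-12
    step = lambda lam: np.linalg.solve(H + lam * np.eye(n), -g)
    lam_hi = lam_lo + 1.0
    while sigma * np.linalg.norm(step(lam_hi)) > lam_hi:  # bracket the root
        lam_hi *= 2.0
    for _ in range(100):
        lam = 0.5 * (lam_lo + lam_hi)
        if sigma * np.linalg.norm(step(lam)) > lam:
            lam_lo = lam
        else:
            lam_hi = lam
    return step(0.5 * (lam_lo + lam_hi))

def arc_minimize(f, grad, hess, x0, sigma=1.0, eps=1e-8, max_iter=200):
    """Adaptive cubic regularization: accept the step when the actual
    decrease matches the model decrease, otherwise inflate sigma."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:          # eps-approximate stationarity
            break
        s = cubic_subproblem(g, H, sigma)
        pred = -(g @ s + 0.5 * s @ H @ s + sigma * np.linalg.norm(s) ** 3 / 3)
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)
        if rho > 0.1:                         # successful step
            x, sigma = x + s, max(1e-8, 0.5 * sigma)
        else:                                 # unsuccessful: regularize more
            sigma *= 2.0
    return x
```

The abstract's O(ε^{-3/2}) evaluation bound refers to iterations of exactly this kind of accept/reject loop; the shift λ ≥ -λ_min(H) is what lets the step make progress even when the Hessian is indefinite.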
dcterms.extent | xv, 139 pages : illustrations | en_US
dcterms.issued | 2016 | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.LCSH | Mathematical optimization. | en_US
dcterms.accessRights | open access | en_US
