Title: Calculus of Kurdyka-Łojasiewicz exponents and its applications in the analysis of first-order methods
Advisors: Pong, Ting Kei (AMA)
Hong Kong Polytechnic University -- Dissertations
Department: Department of Applied Mathematics
Pages: xviii, 169 pages : color illustrations
Abstract: In this thesis, we study calculus rules for Kurdyka-Łojasiewicz (KL) exponents and show how KL exponents are applied in analyzing first-order methods for widely used optimization problems. First, we focus on calculus rules that derive the KL exponents of new functions from functions with known KL exponents. These include deriving the KL exponent of the inf-projection of a function from that of the original function; the KL exponent of the sum of a continuous function and the indicator function of a constraint set from that of its Lagrangian; and the KL exponent of a fractional function from that of the difference between the numerator and (a suitable scaling of) the denominator. Using these rules, we derive explicit KL exponents for some concrete optimization models, such as the fractional model in [115,116], the model of minimizing l1 subject to a logistic/Poisson loss, some semidefinite-programming-representable functions, and some functions with C2-cone reducible structures.
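For reference, the KL property with exponent alpha, in its standard form from the literature (not quoted verbatim from the thesis): a proper closed function f satisfies the KL property at a point x̄ in the domain of its subdifferential, with exponent α ∈ [0, 1), if there exist c, ε, ν > 0 such that

```latex
\operatorname{dist}\bigl(0,\partial f(x)\bigr)\;\ge\; c\,\bigl(f(x)-f(\bar x)\bigr)^{\alpha}
\quad\text{whenever }\ \|x-\bar x\|\le\varepsilon\ \text{ and }\ f(\bar x)<f(x)<f(\bar x)+\nu.
```

Exponent α = 1/2 is the case that typically yields local linear convergence of first-order methods, while exponents in (1/2, 1) typically give sublinear rates.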
Second, we show how KL exponents are employed in analyzing an existing first-order method, the sequential convex programming method with monotone line search (SCPls), for difference-of-convex (DC) optimization problems with multiple smooth inequality constraints. By imposing suitable KL assumptions, we analyze the convergence rate of the sequence generated by SCPls in both nonconvex and convex settings. We also discuss how the various conditions required in our analysis can be verified for minimizing l1-2 subject to a residual error measured by the l2 norm or the Lorentzian norm.

Third, to further illustrate the applications of KL exponents, we focus on the minimization of the quotient of l1 and l2 (denoted by l1/l2) subject to a single, possibly nonsmooth, constraint. We show that the sum of l1/l2 and the indicator function of an affine constraint set satisfies the KL property with exponent 1/2; this allows us to establish linear convergence of the algorithm proposed in [116, Eq. 11] under mild assumptions. We next extend the l1/l2 model to handle compressed sensing problems with noise. We establish solution existence for some of these models under the spherical section property [114,128], and extend the algorithm in [116, Eq. 11] to solve them. We prove subsequential convergence of our algorithm under mild conditions, and establish global convergence of the whole sequence it generates by imposing additional KL and differentiability assumptions on a specially constructed potential function. Finally, we perform numerical experiments on robust compressed sensing and basis pursuit denoising with residual error measured by the l2 norm or the Lorentzian norm, solving the corresponding l1/l2 models with our algorithm. Our numerical simulations show that the algorithm recovers the original sparse vectors with reasonable accuracy.
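As background on why the l1/l2 quotient serves as a sparsity measure (an illustrative sketch, not code from the thesis): for a vector with k equal-magnitude nonzero entries, the quotient equals sqrt(k), so sparser vectors yield smaller values, and unlike the l1 norm alone the quotient is scale-invariant.

```python
import numpy as np

def l1_over_l2(x):
    """Quotient of the l1 and l2 norms: a scale-invariant sparsity measure."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

sparse = np.zeros(100)
sparse[:3] = 1.0          # a 3-sparse vector: quotient is sqrt(3)
dense = np.ones(100)      # a fully dense vector: quotient is sqrt(100) = 10

print(l1_over_l2(sparse))       # ≈ 1.732
print(l1_over_l2(dense))        # 10.0
print(l1_over_l2(5.0 * dense))  # 10.0 (unchanged: the quotient is scale-invariant)
```

Minimizing this quotient over a constraint set therefore favors sparse feasible points, which is the rationale for the l1/l2 models studied above.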
Rights: All rights reserved