|Sparse optimization with a cardinality objective and stochastic complementarity constraints
|Chen, Xiaojun (AMA)
Sun, Defeng (AMA)
Hong Kong Polytechnic University -- Dissertations
|Department of Applied Mathematics
|xviii, 100 pages : color illustrations
|Sparse optimization with complementarity constraints is an active topic with numerous applications in engineering and economics. In this thesis, we consider sparse optimization with a cardinality objective and stochastic complementarity constraints. We use a piecewise linear difference-of-convex (DC) function to approximate the cardinality objective. We investigate theoretical properties of the sparse optimization problem and its continuous approximation, including global and local solutions and optimality conditions, and we develop algorithms with convergence and error analyses. The main results of this thesis are given in Chapters 3 and 4.
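One standard piecewise linear DC surrogate for the cardinality function is the capped-ℓ1 function φν(t) = min(1, |t|/ν), whose sum over the components of x tends to ‖x‖₀ as ν → 0⁺. The sketch below is illustrative only (the function names and test values are not from the thesis, and the thesis's surrogate may differ in detail); it shows the surrogate and one DC decomposition into a difference of two convex functions:

```python
import numpy as np

def capped_l1(x, nu):
    """Piecewise linear capped-ell_1 surrogate for the cardinality
    function: phi_nu(t) = min(1, |t|/nu), applied componentwise.
    For fixed x, sum(capped_l1(x, nu)) -> number of nonzeros as nu -> 0+."""
    return np.minimum(1.0, np.abs(x) / nu)

def dc_parts(x, nu):
    """One DC decomposition phi_nu(t) = g(t) - h(t) with both parts convex:
    g(t) = |t|/nu and h(t) = max(|t|/nu - 1, 0)."""
    g = np.abs(x) / nu
    h = np.maximum(np.abs(x) / nu - 1.0, 0.0)
    return g, h

x = np.array([0.0, 0.05, 2.0, -3.0])
print(capped_l1(x, nu=0.1))                      # componentwise surrogate values
g, h = dc_parts(x, nu=0.1)
print(np.allclose(g - h, capped_l1(x, nu=0.1)))  # the DC parts reproduce phi_nu
```

Small entries (relative to ν) are penalized proportionally, while every entry with |t| ≥ ν contributes exactly 1, mimicking the counting behavior of the cardinality function.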
In Chapter 3, we aim to compute lifted stationary points of a sparse optimization problem (P0) with complementarity constraints and a cardinality objective function. We define a continuous relaxation problem (Rν) that has the same global minimizers and optimal value as problem (P0). Problem (Rν) is a mathematical program with complementarity constraints (MPCC) and a DC objective function. We define MPCC lifted-stationarity of (Rν) and show that it is weaker than directional stationarity but stronger than Clarke stationarity for local optimality. Moreover, we propose an approximation method to solve (Rν) and an augmented Lagrangian method to solve its subproblem (Rν,σ), which relaxes the equality constraint in (Rν) with a tolerance σ. We prove that our algorithm converges to an MPCC lifted-stationary point of problem (Rν).
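The augmented Lagrangian idea behind the subproblem solver can be sketched on a toy equality-constrained problem. Everything below (the objective, the constraint, and the parameter values) is illustrative and is not the thesis's (Rν,σ); it only shows the multiplier-update loop that the method is built on:

```python
# Toy augmented Lagrangian loop (not the thesis's (R_nu,sigma) subproblem):
# minimize f(x) = (x - 2)^2  subject to  h(x) = x - 1 = 0.
# The augmented Lagrangian is
#   L_rho(x, lam) = f(x) + lam * h(x) + (rho / 2) * h(x)^2.

def solve_inner(lam, rho):
    # The inner minimization of L_rho in x has a closed form here:
    # setting 2(x - 2) + lam + rho(x - 1) = 0 gives
    # x = (4 - lam + rho) / (2 + rho).
    return (4.0 - lam + rho) / (2.0 + rho)

lam, rho = 0.0, 10.0
for _ in range(50):
    x = solve_inner(lam, rho)
    lam += rho * (x - 1.0)  # multiplier update on the residual h(x)

print(round(x, 6))    # converges to the constrained minimizer x = 1
print(round(lam, 6))  # converges to the multiplier lam = 2
```

In the thesis's setting the inner problem is solved numerically rather than in closed form, but the alternation between minimizing the augmented Lagrangian and updating the multiplier on the constraint residual is the same.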
In Chapter 4, we consider a class of optimization problems with a DC objective function and stochastic complementarity constraints. We propose a penalty counterpart (Pe) of the DC optimization problem, which is a convexly constrained optimization problem with a DC objective function, and prove the exactness of the penalty. The sample average approximation (SAA) of problem (Pe) is defined as problem (PNe). We then prove that, under the Slater condition, any sequence of global minimizers of problem (PNe) converges with probability 1 to a global minimizer of problem (Pe) as the sample size N tends to infinity. Next, an error bound on the optimal values of problems (PNe) and (Pe) is established. Finally, we develop an algorithm for solving problem (Pe) that uses the SAA exact penalty technique and the DC structure of the objective, and prove its convergence.
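The SAA idea above replaces the expectation over the random data by an empirical mean over N samples, and minimizers of the empirical problem converge to minimizers of the true problem as N grows. A minimal sketch on an assumed toy expectation (E[(x − ξ)²] with ξ ~ N(0, 1), minimized at x = 0; this is not the thesis's problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_objective(x, samples):
    """Sample average approximation of E[(x - xi)^2] for xi ~ N(0, 1).
    The true expectation is x^2 + 1, minimized at x = 0."""
    return np.mean((x - samples) ** 2)

for N in (10, 1000, 100_000):
    xi = rng.standard_normal(N)
    # The SAA minimizer of the empirical mean is the sample mean of xi;
    # it converges to the true minimizer 0 with probability 1 as N grows.
    x_saa = xi.mean()
    print(N, round(x_saa, 4), round(saa_objective(x_saa, xi), 4))
```

The printed SAA minimizers shrink toward 0 and the empirical optimal values approach the true optimal value 1 as N increases, which is the qualitative behavior the chapter's convergence and error-bound results make precise.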
Numerical experiments are presented in Chapters 3 and 4 to illustrate our theoretical results and algorithms. We use randomly generated problems to demonstrate the efficiency of our methods, and we also apply them to problems arising from real applications. The results show that our methods are efficient and competitive with several existing approaches. Finally, conclusions and future work are given at the end of the thesis, and some additional findings are presented in the Appendix.
|All rights reserved