
Now showing items 1-20 of 98

Author Xie, Cong

Title Adaptive and parallel variational multiscale method for the Navier-Stokes equations

Department Dept. of Applied Mathematics
Year 2015
Subject Navier-Stokes equations.

Hong Kong Polytechnic University -- Dissertations
Degree Ph.D.
Pages xxii, 99 pages : color illustrations
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2837219
URI: http://theses.lib.polyu.edu.hk/handle/200/8404
Abstract The Navier-Stokes equations are basic equations in fluid dynamics. The problem is important both in practice and in theory. As it is difficult to find their exact solutions, numerical simulation and experimentation have become important approaches to solving the problem. The variational multiscale finite element method is one of the most useful methods. To guarantee its effectiveness, adaptive algorithms have been developed, which use the solutions computed during the process to automatically control the computation. In this thesis we first present an adaptive variational multiscale method for the Stokes equations. Then we develop two kinds of variational multiscale methods based on the partition of unity for the Navier-Stokes equations. First, we propose some a posteriori error indicators for the variational multiscale method for the Stokes equations and prove the equivalence between the indicators and the error of the finite element discretization. Some numerical experiments are presented to show their efficiency in constructing adaptive meshes and controlling the error. Secondly, a parallel variational multiscale method based on the partition of unity is proposed for incompressible flows. Based on a two-grid method, this algorithm localizes the global residual problem of the variational multiscale method into a series of local linearized residual problems. To decrease the undesirable effect of the artificial homogeneous Dirichlet boundary condition of the local sub-problems, an oversampling technique is also introduced. Globally continuous finite element solutions are constructed by assembling all local solutions together using the partition of unity functions.
In particular, we add an artificial stabilization term in the local and parallel procedure by considering the residual as a subgrid value, which keeps the sub-problems stable. We present the theoretical analysis of the method, and numerical simulations demonstrate the high efficiency and flexibility of the new algorithm. Another partition-of-unity parallel variational multiscale method is then proposed. The main difference is that in this algorithm we propose two kinds of refinement methods. It is difficult to obtain theoretical results comparable to those of the previous method. However, the numerical simulations show that the error of this algorithm decays exponentially with respect to the oversampling parameter.
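The gluing step of the partition-of-unity approach can be illustrated with a minimal sketch: weight functions phi_i, supported on overlapping subdomains, are normalized so they sum to one, and a global field is assembled as sum_i phi_i * u_i. The bump shape, subdomain layout, and Shepard-style normalization below are illustrative assumptions, not the thesis's construction.

```python
import numpy as np

def hat(x, left, right):
    # Piecewise-linear bump supported on [left, right] (illustrative choice).
    mid = 0.5 * (left + right)
    y = np.where(x <= mid, (x - left) / (mid - left), (right - x) / (right - mid))
    return np.clip(y, 0.0, None)

def pu_weights(x, centers, radius):
    # Shepard-style normalization: divide each bump by the sum of all bumps,
    # so the weights are nonnegative and sum to 1 wherever x is covered.
    w = np.array([hat(x, c - radius, c + radius) for c in centers])
    return w / w.sum(axis=0)

# Three overlapping subdomains covering [0.1, 0.9].
x = np.linspace(0.1, 0.9, 101)
phi = pu_weights(x, centers=[0.0, 0.5, 1.0], radius=0.6)

# Local solutions u_i (here: arbitrary smooth functions) glued into a
# globally defined field u = sum_i phi_i * u_i.
locals_ = [np.sin(x), np.cos(x), x ** 2]
u = sum(p * ul for p, ul in zip(phi, locals_))
assert np.allclose(phi.sum(axis=0), 1.0)
```

Because the weights sum to one, wherever all local solutions agree the glued field reproduces them exactly; this is the property that makes the assembled solution globally continuous.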
Author Xu, Yi

Title Algorithms and applications of semidefinite space tensor conic convex program

Department Dept. of Applied Mathematics
Year 2013
Subject Imaging systems -- Mathematics.

Hong Kong Polytechnic University -- Dissertations
Degree Ph.D.
Pages xx, 83 p. : ill. ; 30 cm.
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2653068
URI: http://theses.lib.polyu.edu.hk/handle/200/7267
Abstract This thesis focuses on the algorithms and applications of positive semi-definite space tensors. A positive semi-definite space tensor is a special type of semi-definite tensor with dimension 3. Positive semi-definite space tensors have applications in real life, such as medical imaging. However, there is no algorithm with good performance for solving an optimization problem with a positive semi-definite space tensor constraint, and the structure of positive semi-definite space tensors is not well explored. In this thesis we first analyze the properties of positive semi-definite space tensors; then we construct practicable algorithms to solve an optimization problem with a positive semi-definite space tensor constraint; finally we use positive semi-definite space tensors to solve some medical problems. The main contributions of this thesis are as follows. Firstly, we study methods to verify the semi-definiteness of space tensors and the properties of the H-eigenvalues of tensors. As a basic property of space tensors, positive semi-definiteness is of significant theoretical importance. However, there has been no good method to verify the positive semi-definiteness of space tensors. Based upon nonnegative polynomial theory, we present two methods to verify whether or not a space tensor is positive semi-definite. Furthermore, we study the smallest H-eigenvalue of tensors through the relationship between the smallest H-eigenvalue of tensors and their positive semi-definiteness.

Secondly, we consider the positive semi-definite space tensor cone constrained convex program, its structure and algorithms. We study defining functions, defining sequences and polyhedral outer approximations for this positive semi-definite space tensor cone, give an error bound for the polyhedral outer approximation approach, and thus establish convergence of three polyhedral outer approximation algorithms for solving this problem. We then study some other approaches for solving this structured convex program, including the conic linear programming approach, the nonsmooth convex program approach and the bi-level program approach. Some numerical examples are presented. Thirdly, we apply positive semi-definite tensors to medical brain imaging. Because of the well-known limitations of diffusion tensor imaging (DTI) in regions of low anisotropy and multiple fiber crossings, high angular resolution diffusion imaging (HARDI) and Q-ball imaging (QBI) are used to estimate the probability density function (PDF) of the average spin displacement of water molecules. In particular, QBI is used to obtain the diffusion orientation distribution function (ODF) of these multiple fiber crossings. The ODF, as a probability distribution function, should be nonnegative. We propose a novel technique to guarantee a nonnegative ODF by minimizing a positive semi-definite space tensor convex optimization problem. Based upon convex analysis and optimization theory, we derive its optimality conditions, and then propose a gradient descent algorithm for solving this problem. We also present formulas for determining the principal directions (maxima) of the ODF. Numerical examples on synthetic data as well as MRI data are displayed to demonstrate our approach.
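By definition, a space tensor A of order m and dimension 3 is positive semi-definite when the homogeneous form A x^m is nonnegative for every x. A crude sampled necessary check on the unit sphere can be sketched as follows; this only illustrates the definition and is not one of the two verification methods (based on nonnegative polynomial theory) developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def tensor_value(A, x):
    # Evaluate the homogeneous form A x^m by contracting every mode with x.
    v = A
    for _ in range(A.ndim):
        v = np.tensordot(v, x, axes=([0], [0]))
    return float(v)

def sampled_min_on_sphere(A, n_samples=2000):
    # Necessary condition only: a PSD tensor never goes negative here,
    # but a nonnegative sampled minimum does not prove PSD-ness.
    xs = rng.normal(size=(n_samples, A.shape[0]))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)
    return min(tensor_value(A, x) for x in xs)

# Order-4, dimension-3 example built to be PSD: f(x) = (x.x)^2,
# which equals 1 everywhere on the unit sphere.
I = np.eye(3)
A = np.einsum('ij,kl->ijkl', I, I)
assert abs(sampled_min_on_sphere(A) - 1.0) < 1e-9
```

A negative sampled minimum certifies that a tensor is not PSD; the converse direction is exactly what makes rigorous verification methods, like those in the thesis, necessary.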
Author Zhang, Kai

Title American option pricing and penalty methods

Department Dept. of Applied Mathematics
Year 2006
Subject Hong Kong Polytechnic University -- Dissertations

Options (Finance) -- Mathematical models

Options (Finance) -- Prices -- Mathematical models
Degree Ph.D.
Pages xiv, 135 leaves : ill. ; 30 cm
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2069713
URI: http://theses.lib.polyu.edu.hk/handle/200/2748
Abstract The main purpose of this thesis is to study penalty approaches to American option pricing problems. We consider penalty approaches to pricing plain American options, American options with jump diffusion processes and two-asset American options. Convergence properties of these methods are investigated. Also, numerical schemes - the finite element method and the fitted finite volume method - for solving the penalized PDE are developed. Finally, an augmented Lagrangian method is applied to pricing the plain American option. Empirical tests are carried out to illustrate the effectiveness and usefulness of our methods. For plain American option pricing, based on the theory of variational inequalities, a monotonic penalty approach is developed and its convergence properties are established in appropriate infinite dimensional spaces. We derive the convergence rate of the combination of two power penalty functions; this convergence rate unifies the results for higher and lower order penalty functions. After that, a fitted finite volume method is applied to finding the numerical solution of the penalized nonlinear PDE. We then test this method empirically and compare it with the projected successive over-relaxation method (PSOR for short). We conclude that the monotonic penalty method is roughly comparable with the PSOR method, but is more desirable for its robustness under changes in market parameters; furthermore, its time saving becomes significantly more pronounced as the number of space steps increases. Pricing American options with jump diffusion processes can be formulated as a partial integro-differential complementarity problem.
We propose a power penalty approach for solving this complementarity problem. The convergence analysis of this method is established in appropriate infinite dimensional spaces. Then, using the finite element method, we propose a numerical scheme to solve the penalized problem and carry out numerical tests to illustrate the efficiency of our method. The two-asset American option pricing problem is formulated as a continuous complementarity problem involving a two-dimensional Black-Scholes operator. By using a power penalty method, the two-asset American option model is reformulated as a two-dimensional nonlinear parabolic PDE. By introducing a weighted Sobolev space and the corresponding norm, the coerciveness and continuity of the bilinear operator in the variational problem are derived. Hence, the unique solvability of the original and penalized problems is established. The convergence rate of the power penalty method is obtained in appropriate infinite dimensional spaces. Moreover, to overcome the computational difficulty of the convection-dominated Black-Scholes operator, a novel fitted finite volume method is proposed to solve the penalized nonlinear two-dimensional PDE. We perform numerical tests to illustrate the efficiency of our new method. Finally, based on the fitted finite volume discretization, an algorithm is developed by applying an augmented Lagrangian method (ALM for short) to pricing the plain American option. Convergence properties of ALM are considered. From empirical numerical experiments, we conclude that ALM is more effective than the penalty method and the Lagrangian method, and comparable with the PSOR method. Furthermore, numerical results show that ALM is more robust in terms of computation time under changes in the market parameters: interest rate and volatility.
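As context for the comparison with PSOR: pricing an American option amounts to a linear complementarity problem (LCP), and projected SOR solves it by a Gauss-Seidel relaxation sweep followed by projection onto the constraint. A minimal sketch on a toy obstacle problem; the grid, source term and obstacle below are invented stand-ins for the Black-Scholes discretization.

```python
import numpy as np

def psor(A, b, g, omega=1.2, tol=1e-10, max_iter=10000):
    # Projected SOR for the LCP:  u >= g,  A u >= b,  (u - g).(A u - b) = 0.
    u = np.maximum(g, 0.0).copy()
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(len(b)):
            resid = b[i] - A[i] @ u                           # Gauss-Seidel residual
            u[i] = max(g[i], u[i] + omega * resid / A[i, i])  # relax, then project
        if np.abs(u - u_old).max() < tol:
            break
    return u

# Toy obstacle problem standing in for the discretized Black-Scholes LCP:
# -u'' = -2 on (0, 1), u(0) = u(1) = 0, with obstacle u >= g.
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = -2.0 * np.ones(n)
g = 0.1 - 10.0 * (x - 0.5) ** 2   # obstacle active near the center

u = psor(A, b, g)
assert np.all(u >= g)                       # feasibility is kept exactly
assert abs(u[n // 2] - g[n // 2]) < 1e-12   # contact at the obstacle's peak
```

The contact region here plays the role of the early-exercise region of the American option, where the solution sits exactly on the payoff.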
Author Bai, Yu

Title Analysis of dual-listed companies in mainland and Hong Kong

Department Dept. of Applied Mathematics
Year 2015
Subject Stocks -- Prices -- China -- Shanghai

Stocks -- Prices -- China -- Hong Kong

Stock exchanges -- China -- Shanghai

Stock exchanges -- China -- Hong Kong

Hong Kong Polytechnic University -- Dissertations
Degree M.Phil.
Pages xviii, 94 pages : illustrations (some color)
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2826156
URI: http://theses.lib.polyu.edu.hk/handle/200/8243
Abstract This thesis studies price differences in China's segmented stock markets. Companies in China can be listed as A-shares trading in Shanghai (SH) and H-shares trading in Hong Kong (HK). Large and persistent gaps between A- and H-share prices have been observed, raising concerns about market segmentation within China and its implications for the efficiency of price discovery. The thesis studies factors that help explain the dissimilarity between the two markets. The topics include the use of statistical tests to perform factor analysis, and a dynamic time-warping algorithm to conduct a technical analysis of the patterns of both markets. In Topic 1, the analyses cover the main economic and company-specific factors that influence the AH Premium Index, including fundamental, technical, and market microstructure factors. Data is collected from 50 companies listed in both markets from April 2011 to June 2014, which are grouped into three clusters by the k-means clustering technique. An appropriate factor model is built for each cluster, and statistical tools are applied to test the model. The results demonstrate that different factors contribute to explaining the price gaps in different clusters. It is noted that large price gaps are mostly related to shares of small market capitalization. Since the small supply of A-shares and information asymmetry are disadvantages for international investors, trading strategies involving small-cap stocks are more likely to succeed. By contrast, small price gaps are associated with large market capitalization.
The price gaps have been narrowing in recent years, which may pave the way to convergence following the launch of the Shanghai-Hong Kong Stock Connect. Relative price convergence, but not absolute price convergence, is likely to occur (Obizhaeva and Wang, 2013). Topic 2 builds on the cluster results of Topic 1. In line with the results of the variance test, the price differences between the A- and H-share markets are significant; in other words, the price differentials in the high and low premium clusters are larger than those in the non-premium cluster. The thesis applies a dynamic time-warping algorithm to fit the patterns of the two stocks of the same company, using nine examples to explain this economic phenomenon. The results show that market factors have a clear impact on the premium clusters, while the non-premium group is influenced by fundamental factors.
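Dynamic time warping, used in Topic 2 to match A- and H-share price patterns, aligns two series by allowing local stretching of the time axis. A textbook O(nm) sketch (not the thesis implementation; the toy series are invented):

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping: D[i][j] is the cheapest alignment
    # cost of a[:i] with b[:j] under stretch/compress moves.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s1 = [1.0, 2.0, 3.0, 2.0, 1.0]
s2 = [1.0, 1.0, 2.0, 3.0, 2.0, 1.0]   # same pattern, locally stretched
assert dtw_distance(s1, s2) == 0.0    # DTW sees identical shapes
```

Unlike a pointwise (Euclidean) comparison, DTW scores these two series as identical because one is just a time-stretched copy of the other, which is why it suits pattern matching across markets with different trading dynamics.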
Author Lai, Chung-hei

Title Application of tabu search algorithm for vehicle routing problem

Department Multi-disciplinary Studies

Dept. of Applied Mathematics
Year 2000
Subject Transportation, Automotive

Operations research

Hong Kong Polytechnic University -- Dissertations
Degree M.Sc.
Pages iii, 37, 72 leaves : ill. ; 30 cm
Language English
InnoPac Record http://library.polyu.edu.hk/record=b1540663
URI: http://theses.lib.polyu.edu.hk/handle/200/5233
Abstract This paper deals with the design of a heuristic algorithm to solve the vehicle routing problem (VRP for short) for the Sheung Shui Slaughtering House (SSSH for short). The VRP studied was whether we could allocate the minimum number of lorries to deliver meat to the meat markets in a reasonable time. The VRP was formulated, and several papers on solving the VRP were studied. One of them was selected and implemented as a computer program. The program was benchmarked by executing it on well-known problem data. Then real-life data was used to run the program. Results were recorded and compared with the existing practice. Finally, it could be concluded that the VRP could be solved by tabu search.
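A bare-bones illustration of the tabu search metaheuristic named above, applied to a tiny symmetric TSP with 2-city swap moves; the real VRP adds capacity and fleet constraints, and the instance and parameters below are invented, not the dissertation's implementation.

```python
import itertools, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search_tsp(dist, n_iter=200, tenure=5, seed=0):
    # Best-candidate tabu search with 2-city swap moves and an aspiration
    # rule (a tabu move is allowed if it beats the best tour found so far).
    rng = random.Random(seed)
    n = len(dist)
    current = list(range(n))
    rng.shuffle(current)
    best, best_len = current[:], tour_length(current, dist)
    tabu = {}  # move -> first iteration at which it is allowed again
    for it in range(n_iter):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            nxt = current[:]
            nxt[i], nxt[j] = nxt[j], nxt[i]
            length = tour_length(nxt, dist)
            move = tuple(sorted((current[i], current[j])))
            if tabu.get(move, 0) <= it or length < best_len:
                candidates.append((length, nxt, move))
        length, current, move = min(candidates, key=lambda c: c[0])
        tabu[move] = it + tenure   # forbid reversing this move for a while
        if length < best_len:
            best, best_len = current[:], length
    return best, best_len

# 5 cities on a line at 0..4: any optimal tour has length 2*(4-0) = 8.
pts = [0, 1, 2, 3, 4]
dist = [[abs(p - q) for q in pts] for p in pts]
best, best_len = tabu_search_tsp(dist)
assert best_len == 8
```

The tabu list is what distinguishes this from plain hill-climbing: recently reversed moves are forbidden for `tenure` iterations, letting the search accept non-improving moves and escape local optima.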
Author Mok, Man-ho

Title Applying simulated annealing for the maximal covering location problem with an enhanced algorithm

Department Multi-disciplinary Studies

Dept. of Applied Mathematics
Year 1999
Subject Land use -- Planning -- Mathematical models

Simulated annealing (Mathematics)

Combinatorial optimization

Hong Kong Polytechnic University -- Dissertations
Degree M.Sc.
Pages ix, 92, [10] leaves : ill. ; 30 cm
Language English
InnoPac Record http://library.polyu.edu.hk/record=b1477342
URI: http://theses.lib.polyu.edu.hk/handle/200/2185
Abstract A major class of location problems, the Maximal Covering Location (MCL) problem, first proposed by Church and ReVelle, involves locating a fixed number of facilities in such a fashion that acceptable coverage within some distance standard is provided to the largest amount of demand possible. The MCL problem has been applied to the location of a wide range of emergency facilities, especially fire stations. Over the past couple of decades many algorithms have been developed for solving the MCL problem, including Greedy Addition, Greedy Addition with Substitution, linear programming methods with Lagrangian relaxation, and Branch-and-Bound. In 1996, Murray and Church developed an entirely different type of heuristic employing a stochastic search approach known as Simulated Annealing (SA). In this dissertation, I develop an enhanced SA algorithm for solving the MCL problem. The algorithm is implemented in C and run on a microcomputer. A series of computational experiments has been carried out on two test data sets with problem sizes of 55 and 88 respectively. It is found that the algorithm produces desirable results, with average solutions deviating from the optimal by less than 3.6%. In terms of hit ratio and mean program time, the performance of this enhanced algorithm also proves satisfactory. Through the literature review, I have also studied many models for siting fire stations, all based on the MCL problem. For the sake of completeness, all of these deterministic and probabilistic models are stated in the concluding chapter for reference purposes.
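The simulated annealing idea for the MCL problem can be sketched generically: swap one open facility site for a closed one, always accept improvements, and accept worsening moves with probability exp(delta/T) under a geometric cooling schedule. The tiny instance and all parameters below are invented; this is a sketch of plain SA, not the dissertation's enhanced algorithm.

```python
import math, random

def covered_demand(solution, cover_sets, demand):
    # Total demand weight covered by the chosen facility sites.
    covered = set().union(*(cover_sets[f] for f in solution))
    return sum(demand[d] for d in covered)

def sa_mcl(cover_sets, demand, p, n_iter=4000, t0=5.0, alpha=0.999, seed=1):
    # Swap one open site for a closed one; accept improvements always and
    # worsening moves with probability exp(delta / T), cooling geometrically.
    rng = random.Random(seed)
    sites = sorted(cover_sets)
    sol = set(rng.sample(sites, p))
    best, best_val = set(sol), covered_demand(sol, cover_sets, demand)
    t = t0
    for _ in range(n_iter):
        swap_out = rng.choice(sorted(sol))
        swap_in = rng.choice([s for s in sites if s not in sol])
        cand = (sol - {swap_out}) | {swap_in}
        delta = (covered_demand(cand, cover_sets, demand)
                 - covered_demand(sol, cover_sets, demand))
        if delta >= 0 or rng.random() < math.exp(delta / t):
            sol = cand
            val = covered_demand(sol, cover_sets, demand)
            if val > best_val:
                best, best_val = set(sol), val
        t *= alpha
    return best, best_val

# Tiny invented instance: site -> demand points covered; demand weights.
cover_sets = {'A': {1, 2}, 'B': {2, 3}, 'C': {4}, 'D': {1, 4}}
demand = {1: 10, 2: 5, 3: 5, 4: 10}
best, best_val = sa_mcl(cover_sets, demand, p=2)
assert best_val == 30  # {'B', 'D'} covers all demand
```

Early on, the high temperature lets the search accept coverage-losing swaps and explore; as T decays, it settles into greedy improvement around the best configuration found.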
Author Zhou, Yuying

Title The augmented Lagrangian methods and applications

Department Dept. of Applied Mathematics
Year 2005
Subject Hong Kong Polytechnic University -- Dissertations.

Lagrangian functions.

Mathematical optimization.
Degree Ph.D.
Pages vi, 146 leaves ; 30 cm.
Language English
InnoPac Record http://library.polyu.edu.hk/record=b1835408
URI: http://theses.lib.polyu.edu.hk/handle/200/2063
Abstract The purpose of this thesis is to study a general augmented Lagrangian scheme for optimization and optimal control problems. We establish zero duality gap and exact penalty properties between a primal optimization problem and its augmented Lagrangian dual problem, and characterize local and global solutions for a class of non-Lipschitz penalty problems. We also obtain the existence of an optimal control for an optimal control problem governed by a variational inequality with monotone type mappings, and establish a zero duality gap between this optimal control problem and its nonlinear Lagrangian dual problem. Under the assumptions that the augmenting function satisfies a level-coercivity condition and the perturbation function satisfies a growth condition, a necessary and sufficient condition for a vector to support an exact penalty representation of the problem of minimizing an extended real-valued function is established. Moreover, in general Banach spaces, under the assumption that the augmenting function satisfies a valley-at-zero condition and the perturbation function satisfies a growth condition, a necessary and sufficient condition for a zero duality gap property between the primal problem and its augmented Lagrangian dual problem is established. We show that under some conditions the inequality and equality constrained optimization problem and its augmented Lagrangian problem both have optimal solutions. On the other hand, it is shown that every weak limit point of a sequence of optimal solutions generated by the augmented Lagrangian problem is a solution of the original constrained optimization problem.
Sufficient conditions for the existence of an exact penalization representation and an asymptotically minimizing sequence for a constrained optimization problem are established. It is shown that the second order sufficient condition implies a strict local minimum of a class of non-Lipschitz penalty problems with any positive penalty parameter. The generalized representation condition and the second order sufficient condition imply a global minimum of these penalty problems. We apply our results to quadratic programming and linear fractional programming problems. We then study an optimal control problem where the state system is defined by a variational inequality problem with monotone type mappings. We first study a variational inequality problem for monotone type mappings. Under some general coercivity assumptions, we establish existence results for solutions of variational inequality problems with generalized pseudomonotone mappings, and with generalized pseudomonotone and T-pseudomonotone perturbations of maximal monotone mappings, respectively. We obtain several existence results for an optimal control of the optimal control problem governed by a variational inequality with monotone type mappings. Moreover, as an application, we obtain several existence results for an optimal control of the optimal control problem where the system is defined by a quasilinear elliptic variational inequality problem with an obstacle. Using nonlinear Lagrangian methods, we obtain one necessary condition and several sufficient conditions for the zero duality gap property between this optimal control problem and its nonlinear Lagrangian dual problem. We also apply our results to an example where the variational inequality problem leads to a linear elliptic obstacle problem.
This thesis uses tools from nonlinear functional analysis, nonlinear programming, nonsmooth analysis and numerical linear algebra.
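The basic augmented Lagrangian scheme for an equality-constrained problem alternates minimizing L_A(x) = f(x) + lambda h(x) + (mu/2) h(x)^2 with the multiplier update lambda <- lambda + mu h(x). A minimal sketch on a toy problem; the gradient-descent inner solver and all parameter values are illustrative assumptions, not the thesis's general scheme.

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x0, mu=10.0, n_outer=20):
    # Minimize f(x) s.t. h(x) = 0 via  L_A(x) = f + lam*h + (mu/2)*h^2,
    # with the classical multiplier update  lam <- lam + mu * h(x).
    x = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(n_outer):
        for _ in range(2000):  # inner solver: plain gradient descent
            grad = f_grad(x) + (lam + mu * h(x)) * h_grad(x)
            x = x - 0.01 * grad
        lam += mu * h(x)
    return x, lam

# Toy problem: minimize x^2 + y^2 subject to x + y = 1.
# Known solution: x = y = 0.5 with multiplier lambda = -1.
f_grad = lambda x: 2.0 * x
h = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])
x, lam = augmented_lagrangian(f_grad, h, h_grad, [0.0, 0.0])
assert np.allclose(x, [0.5, 0.5], atol=1e-6)
assert abs(lam + 1.0) < 1e-6
```

Unlike a pure quadratic penalty, the multiplier update lets the iterates converge to the exact constrained solution without driving mu to infinity, which is the exact-penalty behavior the abstract refers to.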
Author Hui, Yuk-hung

Title Comparative analysis of clinical trials using long-term survivor models

Department Multi-disciplinary Studies

Dept. of Applied Mathematics
Year 2001
Subject Clinical trials

Survival analysis (Biometry)

Hong Kong Polytechnic University -- Dissertations
Degree M.Sc.
Pages xii, 154 leaves : ill. ; 30 cm
Language English
InnoPac Record http://library.polyu.edu.hk/record=b1569108
URI: http://theses.lib.polyu.edu.hk/handle/200/3267
Abstract This study ascertains the benefits of using long-term survivor models (mixture models) in the analysis of clinical trials. Survival times of 195 patients with pharynx cancer are analysed with the mixture models advocated by Maller and Zhou (1996). Statistical tests using both non-parametric and parametric methods suggest that immunes are present in the sample (at the 5% level of significance) and that mixture models should apply. The Weibull mixture model provides an excellent description of the patients' survival pattern. The mean absolute percentage error of the Weibull mixture model fit is 8.34%, compared with 17.97% for the ordinary Weibull model - an improvement in goodness of fit of more than 100%. Other measures of goodness of fit, such as the log-likelihood and the correlation coefficient for censored data, yield consistent results. The mixture model predicts that in the long run 20.9% of the patients are free from pharynx cancer after treatment. In the trials, 100 patients were treated with the standard radiation treatment (radiotherapy) alone while the other 95 patients were treated with an additional chemotherapeutic agent (neo-adjuvant chemotherapy). Comparison of treatment effects using traditional approaches did not detect a significant difference between the two regimens (at the 5% level of significance). A parametric test based on the Weibull mixture model, however, reveals a significant difference between them. Patients' short-term survival is compromised by the neo-adjuvant: their median survival time is 397 days (about 13 months) compared with 553 days (about 18 months) for those treated with radiation alone. However, patients receiving the neo-adjuvant have better long-term prospects, as a higher proportion is cured (23.2%).
For patients treated with radiation alone, only 17.2% are cured.
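In a long-term survivor (cure) model, the survival function plateaus at the immune fraction instead of decaying to zero: S(t) = pi + (1 - pi) S_0(t), with a Weibull susceptible component S_0. A sketch using the 20.9% cured fraction quoted above; the Weibull shape and scale are made-up illustrative values, not the thesis estimates.

```python
import math

def cure_model_survival(t, pi, shape, scale):
    # Long-term survivor model: S(t) = pi + (1 - pi) * exp(-(t/scale)^shape).
    # A fraction pi is immune ("cured"); the rest follow a Weibull law.
    return pi + (1.0 - pi) * math.exp(-((t / scale) ** shape))

pi = 0.209                  # cured fraction estimated in the abstract
shape, scale = 1.2, 500.0   # made-up Weibull parameters (illustration only)
assert abs(cure_model_survival(0.0, pi, shape, scale) - 1.0) < 1e-12
# The curve plateaus at the immune fraction instead of decaying to zero:
assert abs(cure_model_survival(1e6, pi, shape, scale) - pi) < 1e-9
```

This plateau is exactly why an ordinary Weibull model, which must decay to zero, fits a sample containing immunes poorly.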
Author Lau, Suk-ting

Title Conditional heteroscedastic autoregressive moving average models with seasonal patterns

Department Dept. of Applied Mathematics
Year 1999
Subject Autoregression (Statistics)

Box-Jenkins forecasting

Economic forecasting

Time-series analysis

Finance -- Mathematical models

Hong Kong Polytechnic University -- Dissertations
Degree M.Phil.
Pages vii, 94 leaves : ill. (some col.); 30 cm
Language English
InnoPac Record http://library.polyu.edu.hk/record=b1477333
URI: http://theses.lib.polyu.edu.hk/handle/200/5002
Abstract Earlier research in time series mainly concentrated on models that assume a constant one-period forecast variance. In reality, however, this assumption may not be met in all cases, especially in economics and finance. Therefore, much recent work has been directed towards relaxing the constant conditional variance assumption, namely allowing the conditional variance to change over time while keeping the unconditional variance constant. Tsay (1987) proposed the conditional heteroscedastic autoregressive moving average (CHARMA) model. One advantage of this model is that it includes the autoregressive conditional heteroscedastic (ARCH) model and the random coefficient autoregressive (RCA) model as special cases. Both models characterize time series with varying conditional variance in different representations. Therefore, the CHARMA model is more flexible and able to model data from a wider perspective. It is also believed that seasonal patterns can be an important phenomenon in the conditional variance, and so the purpose of this research is to study seasonal conditional heteroscedasticity and extend the CHARMA model to a seasonal CHARMA model. One advantage of our approach is that the relevant time series can be modeled with a parsimonious parameterization. The invertibility and stationarity conditions for the model are derived. We study all the procedures for building the model, including the test for varying conditional variance, estimation of the model parameters by least squares and maximum likelihood, and diagnostic checking methodology for testing the adequacy of the fitted model.
Two empirical examples are discussed in detail: the US dollar/Japanese yen exchange rate and the money supply (M1) of the United States. In addition, the ability to capture volatility is compared between the proposed model and the GARCH family, since the GARCH family is widely used in modeling conditional heteroscedasticity. It is found that both the exchange rate and the money supply have clear seasonal volatility. The proposed model can capture this effect and produce good forecasts.
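As a concrete special case of the CHARMA family mentioned above, an ARCH(1) process has conditional variance sigma_t^2 = omega + alpha * y_{t-1}^2 and unconditional variance omega / (1 - alpha) when alpha < 1. A small simulation sketch with invented parameters, illustrating a time-varying conditional variance around a constant unconditional one:

```python
import random

def simulate_arch1(n, omega, alpha, seed=0):
    # ARCH(1): y_t = e_t * sigma_t,  sigma_t^2 = omega + alpha * y_{t-1}^2,
    # with e_t ~ N(0, 1). Stationary variance: omega / (1 - alpha).
    rng = random.Random(seed)
    y, ys = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * y * y
        y = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        ys.append(y)
    return ys

ys = simulate_arch1(200000, omega=0.5, alpha=0.5)
mean = sum(ys) / len(ys)
var = sum((v - mean) ** 2 for v in ys) / len(ys)
# The sample variance should be close to omega / (1 - alpha) = 1.0.
assert abs(var - 1.0) < 0.15
```

The conditional variance clusters: a large shock y_{t-1} inflates sigma_t^2, producing the bursts of volatility that constant-variance models cannot represent.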
Author Dong, Zhiyuan

Title Continuous-mode single-photon states : characterization, pulse-shaping and filtering

Department Dept. of Applied Mathematics
Year 2017
Subject Quantum optics.

Photons.

Hong Kong Polytechnic University -- Dissertations
Degree Ph.D.
Pages xxii, 85 pages : color illustrations
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2950027
URI: http://theses.lib.polyu.edu.hk/handle/200/8861
Abstract This thesis is devoted to studying the statistical properties and quantum filtering of continuous-mode single-photon Fock states. Four topics are under consideration: 1. Wigner spectrum of continuous-mode single-photon Fock states. 2. Coherent feedback control of continuous-mode single-photon Fock states. 3. Quantum filtering for multiple measurements of quantum systems driven by fields in continuous-mode single-photon Fock states. 4. Quantum filtering for multiple measurements of quantum systems driven by two continuous-mode single-photon Fock states. For the first topic, we propose to use the Wigner spectrum to analyze continuous-mode single-photon Fock states. Normal ordering (Wick order) is commonly used in the analysis of quantum correlations. Unfortunately, it can give only partial information for correlation analysis. For example, for a continuous-mode single-photon Fock state (whose correlation function consists of two parts, one due to quantum vacuum noise and the other due to the photon pulse shape), the normal ordering analysis simply ignores the contribution from the quantum vacuum noise. In this topic, we show that the Wigner spectrum is able to provide complete quantum correlation in the time and frequency domains simultaneously. We demonstrate the effectiveness of the method by means of two examples, namely an optical cavity (a passive system) and a degenerate parametric amplifier (DPA, a non-passive system). Numerical simulations show that Wigner spectra are able to reveal the clear difference between the output states of these two systems driven by the same single-photon state.

For the second topic, we show how various control methods can be used to manipulate the pulse shapes of continuous-mode single-photon Fock states. More specifically, we illustrate that two control methods, direct coupling and coherent feedback control, can be used for pulse-shaping of continuous-mode single-photon Fock states. The effect of the control techniques on pulse-shaping is visualized by the Wigner spectrum of the output single-photon states. It can easily be seen that the linear quantum feedback network has much more influence on the detection probability of a single photon than the directly coupled system. In addition, for a simple quantum feedback network, the changes of the output Wigner spectrum with respect to the beamsplitter parameter have also been analyzed. For the third topic, we extend the existing single-photon filtering framework by taking into account imperfect measurements. The master equations and stochastic master equations for quantum systems driven by a single-photon input state are given explicitly. More specifically, we study the case when the output light field is contaminated by vacuum noise. We show how to design filters based on multiple measurements to achieve the desired estimation performance. Two scenarios are studied: 1) homodyne plus homodyne detection, and 2) homodyne plus photon-counting detection. A numerical study of a two-level system driven by a single-photon state demonstrates the advantage of filtering design based on multiple measurements when the output field is contaminated by quantum vacuum noise. For the fourth topic, the problem of quantum filtering with two homodyne detection measurements for a two-level system is considered. The quantum system is driven by two input light field channels, each of which contains a single photon. A quantum filter based on multiple measurements is designed; both the master equations and stochastic master equations are derived.
In addition, numerical simulations for the master equations with various pulse shape parameters are compared. It appears that the maximum excitation probability is achieved when the two photons have the same peak arrival time and the same ratio of bandwidth to the decay rate of the two-level system.
Author Yeung, Hon Keung

Title Coordinated inventory-transportation supply chain models

Department Dept. of Applied Mathematics
Year 2012
Subject Business logistics.

Hong Kong Polytechnic University -- Dissertations
Degree M.Phil.
Pages xxiv, 210 leaves : ill. ; 30 cm.
Language English
InnoPac Record http://library.polyu.edu.hk/record=b2551296
URI: http://theses.lib.polyu.edu.hk/handle/200/6805
Abstract A supply chain, which is a flow of materials, information and funds between different parties, is one of the important issues in today's business and industrial sectors. In a supply chain, a vendor is required to produce items to satisfy the needs of buyers. If the vendor and buyers operate independently to minimize their own costs, the result may not be optimal for the system as a whole. Most of the literature has found that a supply chain can achieve better system cost performance through coordination of vendor and buyers; hence effective coordination plays an important role in the successful operation of supply chains. Chan and Kingsman (2005, 2007) proposed a synchronized cycles model for the coordination of a single-vendor multi-buyer supply chain in which the vendor and buyers synchronize their production and ordering cycles so as to minimize the total system cost. The synchronized cycles model performs better, in terms of total system cost, than both independent optimization and the common order cycle model developed by Banerjee and Burton (1994). Furthermore, the synchronized cycles model addresses some shortcomings of previous coordination models. For example, the model treats the vendor as a manufacturer producing an item to supply multiple heterogeneous buyers and incorporates discrete vendor inventory depletion into the model, an issue rarely addressed in the literature (see Sarmah et al. (2006)). In the synchronized cycles model, finding the optimal solution involves determining the production cycle NT of the vendor and the ordering cycles k_iT and ordering times t_i of the buyers, where the k_i are integer factors of N. Due to the complexity of the model, it is very difficult to find the optimal solution analytically.
Chan and Kingsman (2007) proposed a heuristic algorithm to find a "near-optimal" solution. The algorithm has been found to be competitive when compared with genetic algorithm. However, it is believed that there are still rooms for improvement in the algorithm in terms of the "optimal" solution and computational time.

Transportation is also a key component in a supply chain. Most of the literature on single-vendor multi-buyer coordination assumes, for simplicity, that transportation cost is a constant (i.e., $/order). Truck capacity and truck transportation cost were not considered. Different transportation modes such as less-than-truckload (LTL) and full-truckload (FTL) have been studied, but only for the single-vendor single-buyer supply chain. Transportation modes with truck capacity and truck cost have rarely been applied to a coordinated single-vendor multi-buyer supply chain. Finally, environmental performance is a key issue nowadays as people become more environmentally concerned. However, existing supply chain models, which stress financial performance, have paid little attention to the environment. For instance, more frequent deliveries can reduce the average inventory level in a supply chain but cause more air pollution during transportation. Also, holding too much stock consumes more materials and resources. Hence, raw material wastage and energy wastage should be taken into consideration in supply chain models. It is worth addressing and incorporating these environmental measures into a coordinated supply chain system. -
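The decision structure described in the abstract above (vendor production cycle NT, each buyer ordering on a cycle k_iT where k_i is an integer factor of N) can be sketched in a few lines. The per-buyer cost function below is a deliberately simplified stand-in, not Chan and Kingsman's actual formulation; the parameter values are made up, and the sketch only illustrates the combinatorial choice of the k_i.

```python
from math import isqrt

def divisors(n):
    """All positive integer factors of n (candidate k_i values)."""
    ds = set()
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            ds.update((d, n // d))
    return sorted(ds)

# Toy stand-in for a per-buyer cost: ordering cost A_i amortized over the
# cycle k_i*T, plus holding cost proportional to cycle length. The real
# synchronized cycles cost function is more involved.
def buyer_cost(A_i, h_i, d_i, k_i, T):
    cycle = k_i * T
    return A_i / cycle + 0.5 * h_i * d_i * cycle

def best_cycles(buyers, N, T):
    """For a fixed N and T, pick each buyer's best k_i among the factors of N."""
    choice = []
    for A_i, h_i, d_i in buyers:
        k = min(divisors(N), key=lambda k_i: buyer_cost(A_i, h_i, d_i, k_i, T))
        choice.append(k)
    return choice

buyers = [(100.0, 2.0, 50.0), (8.0, 10.0, 20.0)]  # hypothetical (A_i, h_i, d_i)
print(best_cycles(buyers, N=12, T=0.1))  # each entry divides N=12
```

A full solver would also search over N and T and add the vendor's cost; this fragment shows only the inner divisor-constrained step.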
Author Lee, Yu-chung Eugene

Title Co-ordinated supply chain management and optimal control problems

Department Dept. of Applied Mathematics Year 2007 [more][less]Subject Hong Kong Polytechnic University -- Dissertations.

Business logistics.Degree Ph.D. Pages xxi, 256 leaves : ill. ; 30 cm. Language English InnoPac Record http://library.polyu.edu.hk/record=b2165711 URI: http://theses.lib.polyu.edu.hk/handle/200/1054 Abstract The thesis is divided into two parts: Supply Chain Co-ordinated Model and Optimal Control Problems. The first part of the thesis examines a co-ordinated supply chain model. From classical inventory theory, the economic ordering quantity (EOQ) concept has been widely applied. Under EOQ, a buyer determines the optimal ordering size that minimizes its total cost. In a two-level supply chain, under individual optimal policies, the buyer orders at the EOQ and the vendor determines its own optimal production lot size, i.e. the economic production quantity (EPQ). However, independent optimization may not be optimal for the whole supply chain system. Such an independent policy is known as the non-cooperation system. Many researchers, beginning in the 1970s, started to explore modes of co-ordination that perform better than the non-cooperation system in terms of total system cost. Motivated by a recently developed co-ordination model, the synchronized cycles model, this part of the thesis further explores some characteristics of this model that enhance the co-ordination between the buyers and the vendor. A hybrid heuristic is developed to solve the synchronized cycles supply chain problem. In numerous numerical examples, the hybrid heuristic succeeds in finding a better "near-optimal" solution than the synchronized cycles algorithm developed by Chan and Kingsman (2005, 2007). Further investigations are carried out to explore the characteristics of the model, and bounds are developed on certain decision variables in order to enhance the synchronized cycles algorithm. In addition, a modification of the synchronized cycles model that includes vehicle scheduling is considered. The performance of the synchronized cycles model is compared to the independent policy.
By the "synchronization" characteristic, the synchronized cycles model outperforms the independent policy in both shipment and delivery scheduling and total system cost. Finally, while much past research has emphasized co-ordination that minimizes the total system cost of the supply chain, this thesis investigates how demand heterogeneity, e.g. different values of mean demand, variance and skewness, affects the performance of the synchronized cycles model. This is a novel investigation in the field of supply chain management. Numerical experiments are carried out to identify conditions on the demand heterogeneity under which the synchronized cycles model works well. The second part of the thesis consists of four open optimal control problems applied to different areas. Optimal control techniques are developed in this thesis and applied to resolve the mathematical and computational difficulties encountered in these problems. The optimal control software package MISER3 is used intensively. Numerical examples are provided to demonstrate the effectiveness of the methods developed, and the results obtained are significant. 1. Modeling of a supply chain system with an Ornstein-Uhlenbeck demand process. This demand process is seldom considered in the field of supply chain management and has not yet been analytically derived using Pontryagin's maximum principle. A single-vendor single-buyer supply chain system is formulated, and both co-ordinated and non-cooperative supply chain models are considered. 2. The isoperimetric pillar-construction problem. The problem is to find an enclosed cross-sectional/base region of a pillar, defined by a simple closed curve of fixed perimeter, such that the volume of the pillar, bounded above by a given surface, is maximized. Solutions to the single-pillar and multiple-pillar cases are considered.
For the multiple-pillar case, a novel elliptic separation constraint technique is used to prevent different pillars from overlapping. 3. Modeling of the design of a flexible rotating beam. The problem considers a rotating beam which carries an end mass and rotates in a vertical plane under the effect of gravity by means of a time-varying driving torque. The problem is posed as a continuous-time optimal control problem for the ACLD treatment. Such a computational optimal control approach is a novel technique in the design of the ACLD-treated rotating beam. In addition, the accurate times of the switching points are determined, which has not been considered in previous similar research. 4. Nonlinear model of the quarter-car suspension problem. The vehicle active suspension system has been a popular topic in road vehicle applications, and many researchers have devoted effort to the modeling and design of controlled suspension systems to ensure a smooth ride. A quarter-car suspension model with a state-dependent system of ODEs is considered, and a computational method with enhanced switching controls is used to solve the problem. -
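The EOQ concept discussed in the abstract above has a standard closed form: for demand rate D, fixed ordering cost K and unit holding cost h, the order size Q* = sqrt(2DK/h) minimizes the total cost DK/Q + hQ/2. A minimal sketch, with parameter values made up for illustration:

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Classical economic ordering quantity: Q* = sqrt(2*D*K/h)."""
    return sqrt(2 * demand_rate * order_cost / holding_cost)

def total_cost(Q, demand_rate, order_cost, holding_cost):
    """Ordering cost D*K/Q plus average holding cost h*Q/2."""
    return demand_rate * order_cost / Q + holding_cost * Q / 2

Q_star = eoq(demand_rate=1200, order_cost=50, holding_cost=6)
print(round(Q_star, 2))  # sqrt(2*1200*50/6) = sqrt(20000) -> 141.42
```

Independent optimization in the abstract means each buyer orders at its own Q* while the vendor separately picks its EPQ; the co-ordinated models of the thesis trade off these individually optimal choices against the system-wide cost.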
Author Jiang, Yuan

Title Co-ordination models of a single-vendor multi-buyer supply chain

Department Dept. of Applied Mathematics Year 2009 [more][less]Subject Hong Kong Polytechnic University -- Dissertations.

Business logistics -- Management -- Mathematical models.

Industrial procurement -- Management -- Mathematical models.Degree M.Phil. Pages xiv, 204 leaves : ill. ; 30 cm. Language English InnoPac Record http://library.polyu.edu.hk/record=b2306433 URI: http://theses.lib.polyu.edu.hk/handle/200/3883 Abstract Supply chain management has become a critical issue in current business environments. Much research has emphasized co-ordination that reduces the total system cost in a supply chain network. In the last three decades, various integrated inventory co-ordination models have been established (Sarmah et al. (2006), Khouja and Goyal (2008)). Chan and Kingsman (2005, 2006, 2007) developed a synchronized cycles model that allows each buyer to choose its ordering cycle, where the length of the cycle is a submultiple of the vendor's production cycle. In order to further minimize the total cost, under the synchronized cycle the vendor may schedule the time of delivery within an ordering cycle, and this delivery time may differ from buyer to buyer. It has been shown, by many numerical experiments, that the synchronized cycles model can significantly reduce the total system cost, and in particular the vendor's cost, compared to the independent policy and the common replenishment cycle (e.g. Banerjee and Burton (1994)). However, the cost to the buyers is significantly increased. This research analyses what mechanisms the vendor needs to offer to motivate the buyers to change their policies, so that the savings from coordination can be achieved. The first mechanism proposed by this research is quantity discounts, of which three models are proposed. The second mechanism proposed by this research is a trade credit policy, in which the supplier offers the retailer a delay period, that is, a trade credit period, for paying the purchasing cost.
Such credit policies may be applied as an alternative to quantity discounts to motivate buyers to participate in the supply chain co-ordination. The final mechanism proposed by this research is a cost sharing policy in which a portion of the buyers' holding costs is borne by the vendor. While the vendor benefits from the co-ordination by synchronized cycles, the mechanisms proposed by this research guarantee that a buyer's total relevant cost under coordination will not be increased compared with independent optimization. Hence, both the vendor and the buyers are motivated to co-ordinate in the supply chain. -
Author Wei, Yan

Title Co-ordination, warehousing, vehicle routing and deliverymen problems of supply chains

Department Dept. of Applied Mathematics Year 2016 [more][less]Subject Business logistics -- Mathematical models.

Transportation -- Decision making -- Mathematical models.

Hong Kong Polytechnic University -- DissertationsDegree Ph.D. Pages xvi, 241 pages : color illustrations Language English InnoPac Record http://library.polyu.edu.hk/record=b2925570 URI: http://theses.lib.polyu.edu.hk/handle/200/8741 Abstract Compared with traditional business management, which pursues a single entity's own maximum benefit or minimum cost, supply chain management serves the market from a more systemic perspective: it focuses on achieving good responsiveness as well as economic performance through proper coordination of all the participants across the business functions of the supply chain, including procurement, manufacturing and distribution. With the rapid evolution of information technology, more attention has been given to increasing the degree of coordination among multiple functions within the supply chain. Although integrating more decision modules into one co-ordination mathematical model achieves better supply chain performance, this comes at the expense of the computational time needed to search for the optimal solution, due to the complexity of the model. Hence, our co-ordinated systems focus on integrating two or more of the following decision levels: procurement, production, inventory, warehousing, vehicle routing and deliverymen routing.

This thesis proposes and develops mathematical models and solution methods for four supply chain coordination systems. (i) A synchronized cycles single-vendor multi-buyer supply chain model involving the clustering of buyers with long and short cycles is proposed. The ordering cycle of each buyer is either an integer multiple or an integer factor of the vendor's production cycle. The buyer-clustering mechanism, in which the ordering cycles adopted by buyers are allowed to be longer than the vendor's production cycle, increases the flexibility of the system and thereby reduces the total system cost. The effectiveness of this clustering synchronized cycles model is also analyzed. (ii) An integrated production-warehouse location-inventory (PWLI) model is proposed. In this model, the decision variables of warehouse location, production schedule and ordering frequency are integrated in one model and determined simultaneously by minimizing the total system cost. Meanwhile, a synchronization mechanism is implemented in the system so as to coordinate inventory replenishment decisions. Numerical experiments have been carried out to illustrate the performance of this co-ordinated model. (iii) An extension of the synchronized cycles PWLI model is proposed. In this extended model, deliveries are modeled by a set of heterogeneous vehicle routing problems instead of a fixed cost for each order. Numerical experiments have been carried out to illustrate the performance of this co-ordinated model. (iv) An integrated model for the multi-depot vehicle routing and deliverymen problem is studied. This model incorporates a distribution network of multiple depots, multiple parking sites and multiple customers linked by the trips of a fleet of homogeneous vehicles and a number of deliverymen assigned to the vehicles.
The objective of this model is to determine the number of deliverymen assigned to each vehicle and the routing of vehicles and deliverymen so as to minimize the total relevant costs involved at the two levels. -
Author Wong, Chi-kin

Title Discrete minimal surfaces

Department Dept. of Applied Mathematics Year 2000 [more][less]Subject Minimal surfaces

Hong Kong Polytechnic University -- DissertationsPages [59] leaves : ill. (some col.) ; 30 cm Language English InnoPac Record http://library.polyu.edu.hk/record=b1535406 URI: http://theses.lib.polyu.edu.hk/handle/200/1384 Abstract In this thesis, I propose a new numerical procedure to obtain discrete minimal surfaces with fixed or partially free boundaries. Using this procedure, I recover most of the minimal surfaces obtained by other mathematicians such as Hildebrandt, Karcher as presented in the monograph "Minimal Surfaces" by Dierkes et al (1992) [1]. The same procedure also gives rise to new graphics of partially free boundary minimal surfaces as depicted in the popular scientific account "The parsimonious universe : shape and form in the natural world" by Hildebrandt and Tromba (1996) [3]. The origin of the minimization algorithm comes from the paper of Pinkall and Polthier [4] published in the journal Experimental Mathematics in 1993. My contributions consists of : (i) improving the algorithm of Pinkall and Polthier so that one point at a time needs be minimized, as a consequence of which the computer code is greatly simplified. (ii) writing down the codes in the language of Mathematica and implementing it, (iii) providing the convergence of my algorithm for the fixed boundary case, (iv) using this algorithm to produce graphics of most of the famous minimal surfaces in the book on minimal surfaces by Dierkes et al [1] as well as some additional new minimal surfaces by pasting and refinement techniques starting from some simple fundamental pieces, and (v) writing down the Mathematica codes to handle the partially free boundary problem case. -
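Contribution (i) above, minimizing one point at a time, can be illustrated with a toy relaxation: with all other vertices held fixed, the Dirichlet-energy minimizer for a single free vertex is a weighted average of its neighbours. The sketch below uses uniform weights on a made-up five-vertex patch; Pinkall and Polthier's actual algorithm uses cotangent weights recomputed from the current mesh, so this is only a schematic of the sweep structure.

```python
import numpy as np

def smooth_to_harmonic(pos, neighbors, interior, iters=200):
    """Gauss-Seidel sweep: each interior vertex moves to the average of its
    neighbours, which minimizes a uniform-weight Dirichlet energy in that
    single vertex. (The Pinkall-Polthier algorithm uses cotangent weights.)"""
    pos = {v: np.asarray(p, float) for v, p in pos.items()}
    for _ in range(iters):
        for v in interior:
            pos[v] = sum(pos[u] for u in neighbors[v]) / len(neighbors[v])
    return pos

# Hypothetical 5-vertex "plus" patch: centre vertex 0 is free, the four
# arm vertices are fixed boundary points.
neighbors = {0: [1, 2, 3, 4]}
pos = {0: (0.0, 0.0, 5.0), 1: (1, 0, 0), 2: (-1, 0, 0), 3: (0, 1, 2), 4: (0, -1, 2)}
out = smooth_to_harmonic(pos, neighbors, interior=[0])
print(out[0])  # centre relaxes to the neighbour average (0, 0, 1)
```

With several interior vertices the sweep must be iterated to convergence, which is the fixed-boundary convergence question contribution (iii) addresses.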
Author Zhang, Yanfang

Title Distributionally robust stochastic variational inequalities and applications

Department Dept. of Applied Mathematics Year 2012 [more][less]Subject Variational inequalities (Mathematics)

Hong Kong Polytechnic University -- DissertationsDegree Ph.D. Pages xii, 99 p. : ill. ; 30 cm. Language English InnoPac Record http://library.polyu.edu.hk/record=b2615868 URI: http://theses.lib.polyu.edu.hk/handle/200/6970 Abstract This thesis focuses on the stochastic variational inequality (VI). The stochastic VI has been widely used in engineering and economics as an effective mathematical model for a number of equilibrium problems involving uncertain data. For a class of stochastic VIs, we present in Chapter 2 a new residual function defined by the gap function. The expected residual minimization (ERM) formulation is a nonsmooth optimization problem with linear constraints. We prove the Lipschitz continuity and semismoothness of the objective function and the existence of minimizers of the ERM formulation. We show various desirable properties of the here-and-now solution, which is a minimizer of the ERM formulation. In Chapter 3, we propose a globally convergent (a.s.) smoothing sample average approximation (SSAA) method for finding a minimizer of the ERM formulation. We show that the SSAA problems of the ERM formulation have minimizers in a compact set, and that any cluster point of minimizers (stationary points) of the SSAA problems is a minimizer (a stationary point) of the ERM formulation (a.s.) as the sample size N → ∞ and the smoothing parameter μ ↓ 0. In Chapter 4, we discuss the ERM formulation for the stochastic linear VI, which is convex under some mild conditions. We apply the Moreau-Yosida regularization to derive an equivalent smooth convex minimization problem. To ensure the convexity of the sample average approximation (SAA) problems of the ERM formulation, we adopt the Tikhonov regularization. We show that any cluster point of minimizers of the Tikhonov regularized SAA problems is a minimizer of the ERM formulation as the sample size N → ∞ and the Tikhonov regularization parameter ε → 0.
Moreover, we prove that the minimizer is the least l2-norm solution of the ERM formulation. We also prove the semismoothness of the gradients of the Moreau-Yosida and Tikhonov regularized SAA problems. In Chapter 5, we discuss the distributionally robust stochastic linear VI based on the ERM formulation. We introduce the CVaR formulation defined by the ERM formulation and establish the relationship between the two formulations. For a wide range of cases, we show that the two formulations have the same minimizers. Moreover, we derive the gradient consistency of the smoothing CVaR formulation. We employ the sublinear expectation to consider the distributionally robust CVaR formulation for the stochastic linear VI, and prove the existence of minimizers of the robust CVaR formulation. In Chapter 6, we provide applications of the stochastic VI arising from traffic flow problems. We show that the conditions and assumptions imposed in this thesis hold in such applications. Moreover, numerical results illustrate that the solutions, efficiently generated by the ERM formulation, have desirable properties. -
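The ERM/SAA idea summarized above can be illustrated on a one-dimensional toy problem. Everything below is a made-up example: the natural (projection) residual stands in for the thesis's gap-function residual, and a grid search stands in for the smoothing method. For the VI "find x ≥ 0 such that the VI conditions hold for F(x, ξ) = x − ξ", the natural residual reduces to |x − max(0, ξ)|, and the SAA of the ERM objective averages it over samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_residual(x, xi):
    """Residual of the VI on [0, inf) with F(x, xi) = x - xi:
    r(x, xi) = |x - proj_[0,inf)(x - F(x, xi))| = |x - max(0, xi)|."""
    return np.abs(x - np.maximum(0.0, xi))

def saa_objective(x, samples):
    """Sample average approximation of the ERM objective E[r(x, xi)]."""
    return natural_residual(x, samples).mean()

samples = rng.normal(loc=1.0, scale=1.0, size=10_000)  # uncertain data xi
grid = np.linspace(0.0, 3.0, 301)
values = [saa_objective(x, samples) for x in grid]
x_here_and_now = grid[int(np.argmin(values))]
print(round(x_here_and_now, 2))  # close to the median of max(0, xi), about 1.0
```

As the sample size N grows, the SAA minimizer approaches the ERM ("here-and-now") solution, which is the convergence statement the thesis proves for its smoothed formulations.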
Author An, Congpei

Title Distribution of points on the sphere and spherical designs

Department Dept. of Applied Mathematics Year 2011 [more][less]Subject Sphere.

Spherical data.

Sphere -- Mathematical models.

Sphere -- Models.

Hong Kong Polytechnic University -- DissertationsDegree Ph.D. Pages xiv, 110 leaves : col. ill. ; 30 cm. Language English InnoPac Record http://library.polyu.edu.hk/record=b2462537 URI: http://theses.lib.polyu.edu.hk/handle/200/6292 Abstract This thesis concentrates on the distribution of points on the unit sphere and polynomial approximation on the unit sphere using spherical designs. The study of the distribution of points on the sphere has many applications, including climate modeling and global approximation in geophysics and virus modeling in bioengineering, as the earth and a cell are approximately spherical. In choosing the point set XN, if, as in many applications, the point set is given by empirical data, then the only option is to selectively delete points so as to improve the distribution. If, on the other hand, the points may be freely chosen, then we shall see that there is merit in choosing XN to be a "spherical t-design" for some appropriate value of t. A set of N points on the unit sphere is a spherical t-design if the average value of any polynomial of degree at most t over the set XN is equal to the average value of the polynomial over the sphere. The main contributions of this thesis consist of the following two parts. 1. We consider the characterization and computation of spherical t-designs on the unit sphere in three-dimensional space when N ≥ (t + 1)², the dimension of the space of spherical polynomials of degree at most t. We show how to construct well-conditioned spherical t-designs with N ≥ (t + 1)² points by maximizing the determinant of a Gram matrix subject to an underdetermined system of nonlinear equations. Interval methods are then used to prove the existence of a true spherical t-design and to provide a guaranteed interval containing it. The resulting spherical designs have good geometrical properties (separation and mesh norm).
We discuss desirable properties of the points for both equal weight numerical integration and polynomial interpolation on the sphere, and give examples to illustrate the characterization of these points. 2. We consider polynomial approximation on the unit sphere by a class of regularized discrete least squares methods, with novel choices for the regularization operators and the point sets of the discretization. We allow different kinds of rotationally invariant regularization operators, including the zero operator (in which case the approximation includes interpolation, quasi-interpolation and hyperinterpolation); powers of the negative Laplace-Beltrami operator (which can be suitable when there are data errors); and regularization operators that yield filtered polynomial approximations. As node sets we use spherical t-designs. For t ≥ 2L and an approximation polynomial of degree L, it turns out that there is no linear algebra problem to be solved, and the approximation in some cases recovers known polynomial approximation schemes, including interpolation, hyperinterpolation and filtered hyperinterpolation. For t ∈ [L, 2L), the linear system needs to be solved numerically. We present an upper bound for the condition number and show that well-conditioned spherical t-designs yield good condition numbers. Finally, we give numerical examples to illustrate the theoretical results and show that well chosen regularization operators and well-conditioned spherical t-designs can provide good polynomial approximation on the sphere, with exact or contaminated data. -
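The defining property quoted above (point-set average equals sphere average for every polynomial of degree at most t) is easy to check numerically for a classical example: the six octahedron vertices ±e_i form a spherical 3-design. The Monte Carlo sphere average below is an illustrative stand-in for the exact integral.

```python
import numpy as np

# The six octahedron vertices +/- e_i form a spherical 3-design: the average
# of any polynomial of degree <= 3 over these points equals its average over
# the whole sphere.
octa = np.array([v for i in range(3) for v in (np.eye(3)[i], -np.eye(3)[i])])

def design_average(points, exponents):
    """Average of the monomial x^a * y^b * z^c over the point set."""
    a, b, c = exponents
    return np.mean(points[:, 0]**a * points[:, 1]**b * points[:, 2]**c)

def sphere_average(exponents, n=400_000, seed=0):
    """Monte Carlo average of the same monomial over the unit sphere."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    a, b, c = exponents
    return np.mean(p[:, 0]**a * p[:, 1]**b * p[:, 2]**c)

for expo in [(2, 0, 0), (1, 1, 0), (3, 0, 0), (1, 1, 1)]:
    print(expo, design_average(octa, expo), round(sphere_average(expo), 3))
```

For x², the design average is exactly 1/3, matching the sphere average; the odd and mixed monomials average to zero in both cases. Here N = 6 < (t + 1)² = 16, which is why the thesis's interest in designs with N ≥ (t + 1)² points, and in their conditioning, is a separate question.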
Author Ma, Cheng

Title Exact penalty function methods and their applications in search engine advertising problems

Department Dept. of Applied Mathematics Year 2012 [more][less]Subject Hong Kong Polytechnic University -- Dissertations

Nonlinear programming.

Mathematical optimization.Degree Ph.D. Pages xv, 129 p. : ill. ; 30 cm. Language English InnoPac Record http://library.polyu.edu.hk/record=b2551281 URI: http://theses.lib.polyu.edu.hk/handle/200/6793 Abstract The penalty function method is one of the most fundamental and useful tools in modern optimization and has developed into a major research field since the 1950s. The study of penalty functions has proliferated into many interesting areas within the mathematical optimization community, and researchers in optimization still pursue new breakthroughs in the theoretical and algorithmic aspects of penalty function methods. However, the currently existing exact penalty functions have the disadvantage that the evaluation of the merit function either needs Jacobian information (e.g., augmented Lagrangian penalty functions) or is no longer smooth (l1 or l∞ penalty functions). "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with the property that any unconstrained minimum is a solution of the constrained problem." (Evans, Gould and Tolle [21]). So far, to some extent, this breakthrough has been achieved. Recently, Huyer and Neumaier [38] proposed a new exact and smooth penalty function, obtained by adding an auxiliary variable ε, to deal with the equality constrained minimization problem. The proposed new penalty function enjoys several good properties: (1) good smoothness and exactness properties; (2) boundedness below under reasonable conditions; (3) a combination of regularization with penalty techniques. These properties are not possessed by the classical simple exact penalty functions. Moreover, the new penalty function involves only the values of the objective and constraint functions, rather than gradient or Hessian information.
Nevertheless, the new exact penalty function differs from the traditional definition of a penalty function, namely that the penalty term is zero on the feasible set and positive outside it. In spite of the significant differences between the new penalty function and the classical simple exact penalty functions, a question naturally arises: what exactly is the relationship between them? In this thesis, motivated by Huyer and Neumaier's work [38], we extend the norm term of the exact penalty function in [38] to a class of convex functions, giving a unified framework for some barrier-type and exterior-type penalty functions. We characterize necessary and sufficient conditions for the exact penalty property. Interestingly, we also explore the equivalence between this class of penalty functions and the traditional simple exact penalty functions in the sense of the exactness property. These results clarify that this class of penalty functions not only has the exactness property of the classical simple penalty functions, but also possesses the smoothness property, which is not shared by the latter. Furthermore, since this class of penalty functions is bounded below, a revised penalty function method is established. In addition, we verify that, under certain conditions, the proposed algorithm terminates at the optimal solution of the primal problem after finitely many iterations, while in the absence of these conditions a perturbation theorem for the algorithm can be derived. As a corollary, the global convergence property is presented: every accumulation point of the sequence generated by the algorithm is an optimal solution of the primal problem. The numerical outputs verify the correctness of the developed theory.

We also propose a new exact and smooth penalty function for semi-infinite programming problems. The main feature of our penalty function is that we only need to add one variable ε to handle infinitely many constraints. The merit function is considered as a function of x and ε simultaneously; it has good smoothness and exactness properties without involving gradient or Jacobian matrices. We derive the further useful property that a minimizer (x*, ε*) of the penalty problem satisfies ε* = 0 if and only if x* solves the semi-infinite programming problem. This property shows that the new variable ε can be viewed as an indicator of a local (global) minimizer of the semi-infinite programming problem. In addition, under some mild conditions, a proof of local exactness is given. The numerical results demonstrate that this is an effective and promising approach for solving constrained semi-infinite programming problems. Similarly, we apply a new exact penalty function to tackle the min-max programming problem, establish necessary and sufficient conditions for the exactness property, and characterize second-order sufficient conditions for the local exactness property. Finally, we model the search-based advertising auction as a large-scale integer programming problem under realistic conditions, e.g., multiple slots, advertisers with choice behavior, and the popular generalized second price mechanism, and we apply the new penalty function to this integer program. We also give numerical simulations to provide managerial insights on both operational and theoretical aspects, and compare the numerical performance with currently existing algorithms for search engine advertising problems. -
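The trade-off the Evans-Gould-Tolle quotation points at (exactness versus smoothness) can be seen in a one-variable toy problem: minimize x² subject to x = 1, whose multiplier is λ* = −2. The l1 penalty is exact once ρ > |λ*| but nonsmooth at the solution; the smooth quadratic penalty is never exact for finite ρ. This is a made-up illustration of the classical dichotomy, not the ε-augmented function of Huyer and Neumaier.

```python
import numpy as np

# min f(x) = x^2  subject to  h(x) = x - 1 = 0; solution x* = 1, lambda* = -2.
# l1 penalty f + rho*|h|: exact for rho > |lambda*| = 2, but nonsmooth at x*.
# Quadratic penalty f + rho*h^2: smooth, but its minimizer only approaches
# x* as rho -> infinity.
def f(x):
    return x**2

def h(x):
    return x - 1.0

def argmin_on_grid(phi, lo=-2.0, hi=2.0, n=400_001):
    xs = np.linspace(lo, hi, n)
    return xs[np.argmin(phi(xs))]

x_l1 = argmin_on_grid(lambda x: f(x) + 3.0 * np.abs(h(x)))  # rho = 3 > 2
x_quad = argmin_on_grid(lambda x: f(x) + 3.0 * h(x)**2)     # minimizer 3/4
print(round(x_l1, 4), round(x_quad, 4))
```

The l1 minimizer lands on the constrained solution x = 1 exactly (up to grid resolution), while the quadratic penalty stops short at 3/4; removing this gap without losing differentiability is precisely what the smooth exact penalty functions studied in the thesis achieve.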
Author Tang, Wai Man

Title Financial time series modelling in frequency domain

Department Dept. of Applied Mathematics Year 2017 [more][less]Subject Time-series analysis.

Finance -- Mathematical models.

Finance -- Econometric models.

Hong Kong Polytechnic University -- DissertationsDegree M.Phil. Pages 139 pages : color illustrations Language English InnoPac Record http://library.polyu.edu.hk/record=b2961653 URI: http://theses.lib.polyu.edu.hk/handle/200/8916 Abstract In financial time series modelling, one problem is to identify a small number of potentially important factors and incorporate them into a multi-factor model in order to explain the variable under consideration. In this thesis, we propose a new factor search methodology in the frequency domain, selecting factors based on frequency peak patterns to obtain the final model. This ensures that the key patterns in the dependent variable are found and that suitable factors are selected based on the peaks they have in common. It performs well even when the number of factors is greater than the sample size. In addition, the frequency domain provides flexibility in dealing with independent variables of different timeframes, which can be valuable in finance and economics, where traditional models can usually only handle data at a single sampling frequency. Using the proposed method, we study three different types of applications. The first is to identify the constituents of an index or a mutual fund. We demonstrate that our method can identify most of the constituents based on the frequency fingerprints (key patterns) in the variables. The second is to develop multi-factor models based on macroeconomic factors for economic and financial indices. We show that it is important to include factors with different timeframes to achieve a better fit. Finally, we study the influential technical analysis indicators that investors might be using in their trading decisions, as reflected in the transacted volume, and compare the indicators selected for the same company traded on the Hong Kong and Mainland China stock exchanges. -
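The peak-matching idea can be sketched with synthetic series: compute each series' dominant FFT bins and keep the candidate factors that share a peak with the dependent variable. The series, periods and the two-peak "fingerprint" below are all made-up illustrations of the general idea, not the thesis's actual selection rule.

```python
import numpy as np

n = 512
t = np.arange(n)

def dominant_peaks(series, k=2):
    """Indices of the k largest-magnitude bins of the one-sided FFT --
    a crude stand-in for a series' frequency 'fingerprint'."""
    mag = np.abs(np.fft.rfft(series - series.mean()))
    return set(np.argsort(mag)[-k:])

# Hypothetical dependent variable driven by 10- and 32-sample cycles.
y = np.sin(2 * np.pi * t / 10) + 0.5 * np.sin(2 * np.pi * t / 32)
candidates = {
    "f10": np.sin(2 * np.pi * t / 10 + 0.3),  # shares the 10-sample cycle
    "f32": np.cos(2 * np.pi * t / 32),        # shares the 32-sample cycle
    "f05": np.sin(2 * np.pi * t / 5),         # unrelated cycle
}
target = dominant_peaks(y, k=2)
selected = sorted(name for name, x in candidates.items()
                  if dominant_peaks(x, k=1) & target)
print(selected)  # factors sharing a dominant peak with y: ['f10', 'f32']
```

Because the comparison happens bin by bin in the frequency domain, phase shifts (f10's offset of 0.3) and sine-versus-cosine differences do not hide a shared cycle, which hints at why the frequency domain also copes with regressors sampled on different timeframes.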
Author Yang, Xinmin

Title Generalized convexity in optimization

Department Dept. of Applied Mathematics Year 2002 [more][less]Subject Hong Kong Polytechnic University -- Dissertations

Mathematical optimizationDegree Ph.D. Pages xiii, 166 leaves ; 30 cm Language English InnoPac Record http://library.polyu.edu.hk/record=b1615453 URI: http://theses.lib.polyu.edu.hk/handle/200/827 Abstract In this thesis, some new generalized convex functions and generalized monotone functions are introduced, and their properties and various relations are established. These new generalized convexities and generalized monotonicities are then applied to the study of optimality conditions and duality theory in optimization. The thesis consists of four parts. In Part 1, real-valued generalized convexities, such as the concepts of semistrictly preinvex functions and generalized preinvex functions, are introduced and their properties are given. The preinvexity of a real-valued function is characterized by an intermediate-point preinvexity condition. Some properties of semistrictly preinvex functions and semipreinvex functions are discussed. In particular, the relationship between semistrictly preinvex functions and preinvex functions is investigated. It is shown that a function is semistrictly preinvex if and only if it satisfies a strict invexity inequality for any two points with distinct function values. We also prove that the ratio of two semipreinvex functions is semipreinvex. It is worth noting that these characterizations reveal various interesting relationships among prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions. These relationships are useful in the study of optimization problems. Part 2 studies set-valued generalized convexities. Generalized subconvexlike and nearly subconvexlike functions are introduced. In particular, these two classes of generalized convexities are invariant under multiplication and division, provided that the multiplier or the divisor is a positive function. We note that no other generalized convex function in the family of convexlike functions and its generalizations possesses this prominent property.
A potential application of near subconvexlikeness or generalized subconvexlikeness is in fractional programming. Furthermore, theorems of the alternative are established and applied to obtain Lagrangian multiplier theorems for set-valued optimization problems, as well as scalarization theorems for weakly efficient, Benson properly efficient and Geoffrion properly efficient solutions of set-valued optimization problems. In Part 3, generalized inmonicity is introduced and its relationship with generalized invexity is established. Several examples are given to show that these generalized inmonicities are proper generalizations of the corresponding generalized monotonicities. Moreover, some examples are also presented to illustrate the properly inclusive relations among the generalized inmonicities. Finally, in Part 4, several second order symmetric duality models are provided for single-objective and multi-objective nonlinear programming problems. Weak and strong duality theorems are established under first order or second order generalized convexity assumptions. Our study extends some known results in the recent literature. It is worth noting that special cases of our models and results reduce to the corresponding first order cases, a nice property not possessed by most second order symmetric models presented in the literature.


Pao Yue-kong Library, The Hong Kong Polytechnic University,

Hung Hom, Kowloon, Hong Kong

Privacy Policy Statement
