Article ID: | iaor20053295 |
Country: | Netherlands |
Volume: | 160 |
Issue: | 1 |
Start Page Number: | 121 |
End Page Number: | 138 |
Publication Date: | Jan 2005 |
Journal: | European Journal of Operational Research |
Authors: | Hamers Herbert, Hertog Dick den, Brekelmans Ruud, Driessen Lonneke |
Keywords: | programming: nonlinear |
This paper presents a new sequential method for constrained nonlinear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where the time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical derivative-based optimization methods are not applicable, because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design and evaluates the underlying functions in the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined from the evaluated objective values and constraint violations, combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located such that they yield a poor approximation of the actual model, then a geometry-improving point is evaluated instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on obtaining good solutions with a limited number of function evaluations.
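The two core building blocks described in the abstract, a local linear model fitted by weighted regression and a step that optimizes that model within a trust region, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian distance weighting and the unconstrained steepest-descent step to the trust-region boundary are assumptions made for the sketch (the paper additionally handles constraints via a filter criterion and geometry-improving points).

```python
import numpy as np

def fit_weighted_linear(X, y, center, radius):
    """Fit a local linear model y ~ b0 + g . x by weighted least squares.
    Points closer to the trust-region center get larger weights
    (Gaussian weighting is an assumption of this sketch)."""
    d = np.linalg.norm(X - center, axis=1)
    w = np.exp(-(d / radius) ** 2)
    A = np.hstack([np.ones((len(X), 1)), X])   # design matrix [1, x]
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    return coef[0], coef[1:]                   # intercept, gradient estimate

def trust_region_step(center, grad, radius):
    """Minimize the linear model over a ball of the given radius:
    for a linear model the minimizer lies on the boundary, opposite
    the estimated gradient (closed form, unconstrained case)."""
    n = np.linalg.norm(grad)
    if n == 0.0:
        return center.copy()                   # flat model: stay put
    return center - radius * grad / n

# Tiny example: recover an exactly linear function f(x) = 1 + 2*x0 + 3*x1
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]
center = np.zeros(2)
b0, g = fit_weighted_linear(X, y, center, radius=1.0)
x_new = trust_region_step(center, g, radius=1.0)
```

In the full method, the trial point `x_new` would then be evaluated with the expensive simulator, accepted or rejected via the filter criterion, and the trust region moved or shrunk accordingly.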