Derivative free optimization in higher dimension

Article ID: iaor20021401
Country: United Kingdom
Volume: 8
Issue: 3
Start Page Number: 285
End Page Number: 303
Publication Date: May 2001
Journal: International Transactions in Operational Research
Authors:
Keywords: neural networks
Abstract:

Non-linear optimization methods that require neither explicit nor implicit derivative information about the objective function offer an alternative search strategy when derivatives are unavailable. In factorial design, the experimental identification method in E^m requires about (m + 1) trials. These (m + 1) equally spaced points form a geometry known as a regular simplex. The simplex method is attributed to Spendley, Hext and Himsworth; Nelder and Mead improved it by maintaining a set of (m + 1) points in m-dimensional space that form a non-regular simplex. This study suggests re-scaling the simplex in higher dimensions for a restart phase; the direction of search is also changed when the simplex degenerates. The performance of this derivative-free search method is measured by the number of function evaluations, the number of restart attempts, and the improvement in function value. An algorithm describing the improved method is presented and compared with the Nelder and Mead simplex method. The algorithm is also tested on an artificial neural network (ANN) training problem: to train an ANN with 36 variables, the improved method requires about 40 times fewer function evaluations than the Nelder and Mead method.
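
The abstract describes the Spendley-Hext-Himsworth regular-simplex construction together with a restart phase that re-scales the simplex around the best point found so far. The following is a minimal sketch of that idea, assuming NumPy and SciPy are available; SciPy's Nelder-Mead routine stands in for the baseline method, and the restart rule, shrink factor, tolerances, and the names regular_simplex and simplex_with_restarts are illustrative assumptions, not the authors' exact algorithm.

import numpy as np
from scipy.optimize import minimize

def regular_simplex(x0, scale=1.0):
    # Spendley-Hext-Himsworth construction: m+1 equally spaced
    # vertices (mutual distance = scale) around a base point in E^m.
    x0 = np.asarray(x0, dtype=float)
    m = len(x0)
    p = scale * (np.sqrt(m + 1) + m - 1) / (m * np.sqrt(2))
    q = scale * (np.sqrt(m + 1) - 1) / (m * np.sqrt(2))
    vertices = [x0]
    for i in range(m):
        v = x0 + q          # step q in every coordinate...
        v[i] = x0[i] + p    # ...except coordinate i, which steps p
        vertices.append(v)
    return np.array(vertices)

def simplex_with_restarts(f, x0, max_restarts=5, scale=1.0,
                          shrink=0.5, tol=1e-8):
    # Hypothetical restart loop: run Nelder-Mead from a fresh regular
    # simplex; after each run that improves the best value, rebuild
    # the simplex around the best point at a reduced scale.
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    evals = 0
    for _ in range(max_restarts + 1):
        res = minimize(f, best_x, method='Nelder-Mead',
                       options={'initial_simplex': regular_simplex(best_x, scale),
                                'xatol': tol, 'fatol': tol})
        evals += res.nfev
        if res.fun < best_f - tol:
            best_x, best_f = res.x, res.fun
            scale *= shrink      # re-scale the simplex for the next restart
        else:
            break                # no further improvement: stop restarting
    return best_x, best_f, evals

# Usage: a 4-variable Rosenbrock function as a stand-in test problem.
rosen = lambda x: np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)
x, fx, n = simplex_with_restarts(rosen, np.zeros(4))
print(f"f = {fx:.3e} after {n} evaluations")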
