Article ID: iaor201526248
Volume: 9
Issue: 5
Start Page Number: 845
End Page Number: 865
Publication Date: Jun 2015
Journal: Optimization Letters
Authors: Regis, Rommel
Keywords: heuristics
Simplex gradients are widely used in derivative-free optimization. This article clarifies some of the properties of simplex gradients and presents calculus rules similar to those for the ordinary gradient. For example, the simplex gradient does not depend on the order of the sample points in the underdetermined and determined cases, but this property does not hold in the overdetermined case. Moreover, although the simplex gradient is the gradient of the corresponding linear model in the determined case, this is not necessarily true in the underdetermined and overdetermined cases. However, the simplex gradient is the gradient of an alternative linear model that is required to interpolate the reference data point. Also, the negative of the simplex gradient is a descent direction for any interpolating linear function in the determined and underdetermined cases, but this again is not necessarily true for the linear regression model in the overdetermined case. In addition, this article reviews a previously established error bound for simplex gradients. Finally, this article treats the simplex gradient as a linear operator and provides formulas for the simplex gradients of products and quotients of two multivariable functions, as well as a power rule for simplex gradients.
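To make the object under discussion concrete, the sketch below computes a simplex gradient in the standard way: as the minimum-norm least-squares solution of the linear system built from the differences between the sample points and a reference point. This covers the determined, underdetermined, and overdetermined cases in one formula; in the determined case it reproduces the gradient of the interpolating linear model, consistent with the abstract. The function name `simplex_gradient` and the quadratic test function are illustrative choices, not code from the paper.

```python
# Minimal sketch (not the paper's code): simplex gradient of f at a reference
# point x0 from sample points y_1, ..., y_m, defined here as the minimum-norm
# least-squares solution g of  S g = delta(f), where the rows of S are
# y_i - x0 and delta(f)_i = f(y_i) - f(x0).
import numpy as np

def simplex_gradient(f, x0, points):
    """Simplex gradient of f at x0 from the sample set `points` (m x n array)."""
    x0 = np.asarray(x0, dtype=float)
    S = np.asarray(points, dtype=float) - x0          # rows are y_i - x0
    delta = np.array([f(y) for y in points]) - f(x0)  # function-value differences
    # lstsq returns the minimum-norm least-squares solution, so the same call
    # handles the determined, underdetermined, and overdetermined cases.
    g, *_ = np.linalg.lstsq(S, delta, rcond=None)
    return g

if __name__ == "__main__":
    f = lambda x: x[0] ** 2 + 3.0 * x[1]   # illustrative smooth test function
    x0 = np.array([1.0, 2.0])
    pts = np.array([[1.1, 2.0], [1.0, 2.1]])  # determined case: n = 2 points besides x0
    print(simplex_gradient(f, x0, pts))       # close to the true gradient (2, 3)
```

In this determined example the simplex gradient equals the gradient of the linear model interpolating the three data points; reordering `pts` leaves the result unchanged, as the abstract notes for the determined and underdetermined cases.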