The derivative-free global optimization algorithms developed in Part I of this study for linearly constrained problems are extended to nonconvex n-dimensional problems with convex constraints. The initial
feasible datapoints are chosen such that the volume of the simplex they generate is maximized; as the algorithm proceeds, additional feasible datapoints are added such that the convex hull of the available datapoints expands efficiently toward the boundaries of the feasible domain. As in the algorithms of Part I, at each step a search function is defined based on an interpolating function, which passes through all available datapoints, and a synthetic uncertainty function, which characterizes the distance to the nearest datapoints. This uncertainty function is, in turn, built on the framework of a Delaunay triangulation of all available datapoints together with the (infeasible) vertices of an exterior simplex that completely contains the feasible domain. The search function is minimized within those simplices of the Delaunay triangulation that do not include any vertex of the exterior simplex. If the outcome of this minimization lies within the circumsphere of a simplex that includes a vertex of the exterior simplex, the new point is projected out to the boundary of the feasible domain. For problems in which the feasible domain includes edges (due to the intersection of multiple twice-differentiable constraints), a modified search function is used in the vicinity of these edges to ensure convergence.
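To make the construction concrete, the following is a minimal Python sketch, not the authors' implementation, of the Delaunay-based uncertainty function and the resulting search function. It assumes the piecewise-quadratic form used in Part I, in which e(x) = r^2 - ||x - z||^2 within each Delaunay simplex with circumcenter z and circumradius r, and a fixed tradeoff constant K in s(x) = p(x) - K e(x); the use of scipy.spatial.Delaunay, the function names, and the placeholder interpolant p are all illustrative assumptions.

```python
# Hedged sketch: Delaunay-based uncertainty function and search function,
# assuming the piecewise-quadratic form from Part I of this study.
import numpy as np
from scipy.spatial import Delaunay

def circumsphere(vertices):
    """Circumcenter z and circumradius r of an n-simplex from its n+1 vertices."""
    v0 = vertices[0]
    A = 2.0 * (vertices[1:] - v0)                 # n x n linear system for z
    b = np.sum(vertices[1:]**2 - v0**2, axis=1)   # ||v_i||^2 - ||v_0||^2
    z = np.linalg.solve(A, b)
    r = np.linalg.norm(z - v0)
    return z, r

def uncertainty(x, tri, points):
    """Piecewise-quadratic uncertainty e(x): zero at datapoints, positive inside simplices."""
    idx = tri.find_simplex(x)
    if idx < 0:          # guard for this sketch: x outside the triangulated hull
        return 0.0
    z, r = circumsphere(points[tri.simplices[idx]])
    return r**2 - np.linalg.norm(x - z)**2

def search_function(x, p, tri, points, K=1.0):
    """Assumed form s(x) = p(x) - K e(x), with p an interpolant of f at the datapoints."""
    x = np.asarray(x, dtype=float)
    return p(x) - K * uncertainty(x, tri, points)

# Toy usage: datapoints in the unit square, with a quadratic stand-in for the interpolant p.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.6]])
tri = Delaunay(pts)
p = lambda x: float(np.sum((x - 0.3)**2))         # placeholder surrogate, not a real objective
print(search_function([0.5, 0.5], p, tri, pts))
```

In the algorithm described above, this search function would be minimized only over the simplices that exclude the exterior-simplex vertices; that bookkeeping, the projection step to the boundary, and the modified search function near edges are omitted from the sketch.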