The Selection of Routines section (see figure below) in the IMSL Fortran Library documentation for Chapter 8 on numerical optimization provides a helpful guide for navigating the broad array of available algorithms. The No Free Lunch Theorem tells us that, averaged over the set of all possible objective functions, no optimization algorithm outperforms any other, not even random search. However, once the user knows something about the specifics of their problem, especially the form of the objective function, the choice of algorithm can be improved immensely.
The functions BCPOL (Fortran) and imsl_f_min_con_polytope (C) are implementations of a direct search complex algorithm, designed for use when first and second derivatives are unavailable, based on the work of Nelder and Mead (1965). In the IMSL documentation, this routine is recommended as a last resort when other methods fail. By design, the algorithm is a good selection for black-box constrained optimization where the objective function is not smooth and no gradient information is available.
A powerful use case for BCPOL
Given its last-resort status, BCPOL is seen less often in practice than other IMSL functions. Recently, a customer in the financial services industry provided a use case that exemplifies where BCPOL is powerful, and they also requested enhancements allowing finer tuning of the algorithm.
In the standard use case of optimizing a function of n variables, the user supplies lower and upper bounds on each variable and, optionally, an initial guess for the location of the minimum. The algorithm then seeds the domain with an initial complex of 2n randomly chosen points. In each iteration, a new point is generated to replace the worst point xj in the complex, i.e., the point with the largest function value in a minimization problem. The new reflection point xr is computed as

xr = c + α(c − xj)

where c is the centroid of the complex with the worst point excluded,

c = (1/(2n − 1)) Σ_{i ≠ j} xi

and α is the reflection coefficient, α > 0.
If this new point is a best point (i.e., the point with the smallest function value), the algorithm expands around it to determine whether an even better point can be located. The expansion point xe is defined as

xe = c + β(xr − c)

where β is the expansion coefficient, β > 1.
Conversely, if the reflection point xr and xj are the two worst points in the complex, the complex is contracted in an attempt to arrive at a different, better point xc:

xc = c + γ(xj − c)

where γ is the contraction coefficient, 0 < γ < 1. If this contraction step does not find a better point than xj, the complex is shrunk by moving the 2n − 1 worst points toward the current best point.
These comparisons are key to the complex search algorithm, as only function values are available, with no gradients to indicate the direction of descent. Iterations repeat until one of the stopping criteria is met. Additional details of the mathematics of these steps, including definitions of the stopping criteria, are available in the documentation for the algorithm.
As mentioned, for a black-box objective function this algorithm is quite robust. However, there are only a few ways to tune its performance. If the algorithm terminates too quickly, too far from a minimum, the user can tighten the tolerance of the first convergence criterion. If there is additional knowledge of the domain, an initial guess can be provided to guide the early iterations. Finally, the objective function can be carefully crafted using a function penalty, where an arbitrarily large value is returned for undesirable points in order to further constrain the solution space.
This IMSL customer desired more control over the algorithm than the existing programming interface allowed. The reflection, expansion, and contraction steps are governed by the parameters α, β, and γ. The default values, α = 1.0, β = 2.0, and γ = 0.5, appear to satisfy most users, but when fine-tuning a problem, especially one where function calls are expensive, a knowledgeable user could guide the algorithm by adjusting these coefficients. Further, a user may wish to provide an initial complex instead of letting the algorithm choose one randomly.
Any general software library interface must balance simplicity and power. If an API is too simple, it is limited in the scope of problems it can solve; conversely, if an API is too complex, its usefulness to non-experts is reduced. Recognizing the utility of letting users modify these three parameters and provide an initial complex, the IMSL development team has added them as optional arguments to BCPOL in the IMSL Fortran Library version 7.1.1. This algorithm is also available as
imsl_f_min_con_polytope in the IMSL C Library version 2016, available for download today. The IMSL C API also includes access to the additional parameters.
The field of numerical optimization is vast and deep. Many algorithms trace their roots back decades, but modern users applying them to present-day problems often have more demands than their predecessors. As a broad mathematical and statistical library, the IMSL Library suite continues to adapt to customer requests through its nearly 45 years of active development.
To discover more information on the IMSL algorithms discussed here, visit IMSL Numerical Libraries.
Wolpert, D. H. and W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1 (1), pp. 67–82, 1997.
Nelder, J. A. and R. Mead, A simplex method for function minimization, The Computer Journal 7 (4), pp. 308–313, 1965.