Finding the minimum of a function solves many problems where the minimum tells you what the best solution is: for example, the minimum cost to construct a building, or the fit of an equation to noisy data that produces the least overall error. Many of us learned to find the roots of a polynomial by Newton's method. This classic method works well in simple cases, but, as we know, not for complex roots, where the polynomial never crosses the x-axis near a local minimum. Similarly, for a cost function of several independent variables (a multidimensional problem), the gradient method, which starts from one point and follows the downward slope of the cost function (often called an "objective function" in mathematics, because it is an unbiased, or objective, way to find the best answer) to a minimum value, is also a classic method that works very well in simple cases.
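To make the idea concrete, here is a minimal sketch (in Python, rather than the Fortran used later in this article) of the gradient method on a simple one-variable cost function; the function and starting point are illustrative assumptions, not taken from any program discussed here.

```python
# Illustrative only: gradient descent on the convex cost f(x) = (x - 3)**2.
# From any starting point it follows the downward slope to the minimum at x = 3.
def gradient_descent(grad, x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * grad(x)      # move against the local slope
    return x

# The gradient (derivative) of (x - 3)**2 is 2*(x - 3).
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=10.0)
```

On a smooth bowl like this, the method homes in on the single minimum quickly; the trouble described next begins when there is more than one bowl.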

But what if the problem requires you to find the deepest pothole in a region of foothills and valleys full of potholes? The simple classical methods don't work very well, because they can only find the bottom of the nearest pothole, which might sit on top of a ridge, very far from the best answer. This problem often occurs either where the cost function is very complex, or where the measurements that determine the cost function are very noisy. The cost function can even be like the throat of a volcano: a deep well on top of a mountain that reaches below the mountain's roots. This kind of problem requires a very robust method, one that can jump in and out of potholes until it comes across the deepest one.
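The trouble is easy to demonstrate with a "two-pothole" cost function (a hypothetical example, not one of the hydraulics problems in this article): plain downhill descent lands in whichever pothole is nearest the start, even when the other one is deeper.

```python
# Double-well cost: f(x) = (x**2 - 1)**2 + 0.3*x.
# It has a shallow pothole near x = +0.96 and a deeper one near x = -1.04.
def descend(grad, x0, step=0.01, iters=5000):
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
grad = lambda x: 4.0 * x**3 - 4.0 * x + 0.3   # derivative of f

right = descend(grad, 2.0)    # starts near the shallow pothole and stays there
left = descend(grad, -2.0)    # only this start happens to find the deeper one
```

The answer depends entirely on the starting guess; a robust method should reach the deeper pothole from either start.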

One such method is simulated annealing. It may calculate the objective function millions of times to find the best probable answer, and it is made practical by modern computers. A copy from the www.netlib.org web site, SIMANN.FOR, has such good comments written in it that it virtually explains itself. Since this program was written by economists, it actually maximizes a profit function; in the hydraulics examples here, the sign of the function is changed so that it minimizes a cost function.
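The sign change is the standard trick for reusing a maximizer as a minimizer, since maximizing -f(x) is the same as minimizing f(x). A toy sketch, with a made-up grid search standing in for SIMANN.FOR's machinery:

```python
# A routine that only knows how to maximize a "profit"...
def maximize(profit, candidates):
    return max(candidates, key=profit)

# ...can still minimize a cost function if handed the negated function.
cost = lambda x: (x - 2.0) ** 2            # minimum at x = 2
xs = [i / 10.0 for i in range(-50, 51)]    # coarse search grid
best = maximize(lambda x: -cost(x), xs)
```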

In short, it is a conceptual model of the annealing of metal by slowly reducing its heat. This kind of annealing lets the atoms in the metal slowly settle into the positions where the metal has the least stress, a kind of optimum for some applications. Simulated annealing starts from a guess at the best answer and uses a random number generator to make the independent variables jump all over their ranges, and even out of them. When an independent variable of the cost function jumps out of its allowed range, the random number generator is simply used again to set a position within the allowed range. The method actually jumps uphill part of the time, to see if a better answer lies just over a ridge in the cost function, and it uses a probability function to decide whether or not to temporarily accept an uphill jump. The maximum step size over which an independent variable can randomly jump is adjusted up or down so that the number of uphill jumps the method accepts is nearly equal to the number it rejects. It always accepts a downhill jump, and it always keeps track of the best answer yet, to which it returns periodically.
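The accept/reject rule described above is the Metropolis criterion. A Python sketch of the same rule (the variable names are mine, not SIMANN.FOR's):

```python
import math, random

def accept_jump(delta_cost, temperature, rng=random.random):
    """Always accept a downhill jump (delta_cost <= 0); accept an
    uphill jump with probability exp(-delta_cost / temperature)."""
    if delta_cost <= 0.0:
        return True
    return rng() < math.exp(-delta_cost / temperature)
```

At a high temperature almost any uphill jump is accepted; as the temperature falls, exp(-delta/T) shrinks and uphill jumps are rarely kept.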

The probability function used to decide whether to accept an uphill jump depends on a program variable called the "temperature". Periodically, when the program returns to the best minimum, it reduces the "temperature". The schedule is designed to gradually reduce the maximum size of the random jumps to a small value, according to the sensitivity of the cost function to each independent variable. If everything works the way it is supposed to, by this time the method has found the deepest pothole and is simply circling the bottom like a dog going to bed. It stops and produces the answer: a set of values of the independent variables, the maximum step sizes, and the cost function. The goodness of the fit is also indicated by the remaining size of the maximum step length; if an independent variable does not really affect the value of the cost function, its maximum step size never shrinks to a small value.
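Putting the pieces together, here is a compact one-variable sketch of the whole loop: random jumps, the redraw for out-of-range jumps, the temperature-dependent accept rule, step-size adjustment, cooling, and the periodic return to the best answer yet. It is a simplification of SIMANN.FOR's logic; the constants, the cooling schedule, and the double-well test function are illustrative assumptions, not the program's values.

```python
import math, random

def anneal(cost, x0, lo, hi, t0=10.0, rt=0.85, n_temps=60, moves=200, seed=1):
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    step = (hi - lo) / 2.0                      # maximum random jump size
    t = t0                                      # the "temperature"
    for _ in range(n_temps):
        accepted = 0
        for _ in range(moves):
            xp = x + rng.uniform(-step, step)
            if not lo <= xp <= hi:              # jumped out of its range:
                xp = rng.uniform(lo, hi)        # redraw inside the range
            fp = cost(xp)
            d = fp - fx
            if d <= 0.0 or rng.random() < math.exp(-d / t):
                x, fx = xp, fp                  # downhill, or accepted uphill
                accepted += 1
                if fx < best_f:
                    best_x, best_f = x, fx      # keep track of the best yet
        ratio = accepted / moves
        if ratio > 0.6:                         # too many accepts: widen step
            step *= 1.5
        elif ratio < 0.4:                       # too few accepts: shrink step
            step *= 0.7
        t *= rt                                 # reduce the "temperature"
        x, fx = best_x, best_f                  # return to the best yet
    return best_x, best_f

# Two-pothole cost: deeper minimum near x = -1.04, shallower near x = +0.96.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best_x, best_f = anneal(f, x0=2.0, lo=-3.0, hi=3.0)
```

Even started next to the shallow pothole, the early high-temperature jumps range over the whole interval, and by the end the shrunken temperature and step size leave the search circling the bottom of the deeper one.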

The investigator (programmer) can change anything in the program, but usually only a few things need to be changed to fit a problem, and the program can otherwise be used like a black box. For the most part, these are the cost/objective function, the initial conditions (starting values) of the independent variables, and the boundaries of the independent variables. The investigator may also wish to modify the program to read from a file something like measurement data to which the equation must be fitted.

In this example, simanh9.for, I've turned the simann.f program into a subroutine, siman, which the main program calls after reading in the input data and making some transformations. It fits a head-discharge equation