A brief comparison of simulated annealing and genetic algorithm approaches

 

Jukka Kohonen

 

Term paper for the "Three Concepts: Utility" course

15.12.1999

 

Department of Computer Science

University of Helsinki


Abstract

Simulated annealing (SA) and genetic algorithms (GA) are two stochastic methods currently in wide use for difficult optimization problems. This paper briefly compares their theoretical backgrounds and discusses how they can be meaningfully compared empirically.


1. Introduction

Simulated annealing and genetic algorithms are often viewed as quite separate, competing paradigms in the field of modern heuristics. Although they are known to be related, it seems quite common to think that there is a great dividing line between SA and GA, with only smaller differences between variants within each paradigm.

This line of thought easily leads to simple empirical comparisons where one specific SA implementation is matched against one specific GA implementation, and sweeping generalizations are made from the results, to the effect that "GA clearly outperforms SA in this field of problems" (or vice versa). Such comparisons may be grossly misleading.

In reality, it seems that the two approaches are closer relatives than is commonly thought, and meaningful comparisons require careful consideration, both theoretical and empirical.

2. Theoretical comparison

Theoretically, SA and GA are quite close relatives, and much of their difference is superficial. The two approaches are usually formulated in ways that look very different, and using very different terminology.

With SA, one usually talks about solutions, their costs, and neighbors and moves; while with GA, one talks about individuals (or chromosomes), their fitness, and selection, crossover and mutation. This difference in terminology of course reflects the differences in emphasis, but also serves to obscure the similarities and the real differences between SA and GA.

Basically, SA can be thought of as a GA where the population size is only one. The current solution is the only individual in the population, and since there is only one individual, there is no crossover, only mutation.

This is in fact the key difference between SA and GA. While SA creates a new solution by modifying only one solution with a local move, GA also creates solutions by combining two different solutions. Whether this actually makes the algorithm better or worse is not straightforward; it depends on the problem and the representation.
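To make the structural relationship concrete, the following Python sketch expresses both methods in a common form. The sketch is my own illustration; the problem interface (cost, neighbor, mutate, crossover, temp_schedule) is left abstract, and the names are not taken from the cited literature.

    import math
    import random

    def anneal(initial, cost, neighbor, temp_schedule, steps):
        """Minimal simulated annealing: one solution, local moves only."""
        current, current_cost = initial, cost(initial)
        for step in range(steps):
            t = temp_schedule(step)            # schedule assumed to stay positive
            candidate = neighbor(current)      # a local move (a "mutation")
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worse moves with
            # the usual Boltzmann probability exp(-delta / t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
        return current

    def genetic(population, cost, mutate, crossover, generations):
        """Minimal generational GA: selection, crossover, mutation."""
        for _ in range(generations):
            ranked = sorted(population, key=cost)
            parents = ranked[:len(ranked) // 2] or ranked   # truncation selection
            children = []
            while len(children) < len(population):
                a, b = random.choice(parents), random.choice(parents)
                children.append(mutate(crossover(a, b)))
            population = children
        return min(population, key=cost)

With a population of one, the genetic loop degenerates exactly as described above: both parents are the same individual, so crossover contributes nothing and only mutation remains. The main structural element SA then adds is the temperature-controlled acceptance rule.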

It should be noted that both SA and GA share the fundamental assumption that good solutions are more likely to be found "near" already known good solutions than by randomly selecting from the whole solution space. If this were not the case with a particular problem or representation, they would perform no better than random sampling.

What GA does differently here is that it treats combinations of two existing solutions as being "near", making the assumption that such combinations (children) meaningfully share the properties of their parents, so that a child of two good solutions is more likely to be good than a random solution. Again, if for a particular problem or representation this is not the case, then GA will not provide an advantage over SA.

This obviously depends on the crossover operator. If the crossover operator is poorly chosen with respect to the problem and its representation, then a recombination is effectively a random solution. As noted in [4], this kind of destructive crossover often occurs in combinatorial problems when a chromosome directly expresses a solution, and it can sometimes be cured by choosing a different representation, where a chromosome is thought of as a "genotype" that only indirectly expresses a solution, a "phenotype". This approach, with two levels of solution representation, has traditionally been specific to GA, but there is no reason why a similar approach could not be applied in SA as well.
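As an illustration of the two-level representation, consider a random-key encoding for permutation problems. The encoding itself is standard, but its use here is my own example, not necessarily the construction used in [4]. The genotype is a vector of real-valued keys; the phenotype is the permutation obtained by sorting positions by key. Any crossover of two key vectors then decodes to a valid permutation, whereas naive crossover applied directly to permutations would typically produce invalid ones.

    import random

    def decode(genotype):
        """Phenotype: the permutation obtained by sorting positions by key."""
        return sorted(range(len(genotype)), key=lambda i: genotype[i])

    def uniform_crossover(a, b):
        """Mix two key vectors; the decoded child is always a valid permutation."""
        return [random.choice(pair) for pair in zip(a, b)]

    parent_a = [random.random() for _ in range(6)]
    parent_b = [random.random() for _ in range(6)]
    child = uniform_crossover(parent_a, parent_b)
    print(decode(child))   # always a permutation of 0..5, never a broken tour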

Also, it should be noted that the relative weight given to mutation and recombination is a crucial parameter affecting what a GA actually does. If mutation is the dominant way of creating new solutions, then the GA in fact acts as a parallelized version of SA, where several solutions are improved independently.
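This can be seen by making the crossover probability an explicit parameter, as in the following sketch (again my own illustration):

    import random

    def reproduce(parents, mutate, crossover, p_crossover):
        """One reproduction event; p_crossover sets the recombination/mutation
        balance. Assumes at least two parents are available.

        With p_crossover = 0, every child descends from a single parent, so
        the population is simply several independent mutation-driven searches
        running side by side -- essentially a parallel SA, minus the
        temperature-controlled acceptance rule.
        """
        if random.random() < p_crossover:
            a, b = random.sample(parents, 2)
            return mutate(crossover(a, b))
        return mutate(random.choice(parents))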

From the practical viewpoint, it should be noted that for some problems, evaluating solutions that are near an existing solution may be very efficient, because the cost change caused by a small local move can often be computed incrementally. This may give a big performance advantage to SA over GA, if recombined solutions cannot be evaluated as cheaply.
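A classic example is the 2-opt move in the travelling salesman problem, sketched below under the assumption of a symmetric distance matrix. Re-evaluating a recombined tour costs O(n), while the cost change of a local move needs only four distance lookups.

    def tour_length(tour, dist):
        """Full evaluation, O(n): what a recombined tour typically requires."""
        n = len(tour)
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def two_opt_delta(tour, dist, i, j):
        """Incremental evaluation, O(1): cost change of reversing tour[i+1..j].

        Only two edges are replaced, so the difference needs four lookups
        (assumes dist is symmetric).
        """
        n = len(tour)
        a, b = tour[i], tour[(i + 1) % n]    # edge (a, b) is removed
        c, d = tour[j], tour[(j + 1) % n]    # edge (c, d) is removed
        return dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]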

3. Empirical comparisons

It is surprising how often execution time is neglected in empirical comparisons of optimization algorithms for combinatorial problems. In my view, it should be a key element of any such comparison.

After all, if there were no limits on execution time, one could always perform a complete search, and get the best possible solution. Most stochastic algorithms can do the same, given unlimited time. In practice, there are always some limits on the execution time.

Also, a key property of stochastic algorithms such as SA and GA is that given more time, they usually provide better solutions, at least up to some limit. If, in an empirical comparison, algorithm A is allowed to use more time than algorithm B, their solution qualities are no longer comparable, since there is no indication of how good the solutions of algorithm A would have been, had it been given the same time as algorithm B.

It is, of course, possible that for short time limits one algorithm is better, while for longer time limits the other one is. There are some hints in the literature that this is the case with SA and GA; to be more precise, some comparisons seem to claim that SA is a "quick starter" which obtains good solutions in a short time but cannot improve on them given more time, while GA is a "slow starter" that improves its solutions consistently when given more time. However, any such claim is in doubt unless it is explicitly stated and tested by running each algorithm both for a short time and for a long time.
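Such a test could be arranged along the following lines. The sketch is my own; run_step, sa_init, sa_cost and the other names stand for whatever stepping interface the algorithms under test expose, and are purely hypothetical.

    import time

    def best_within_budget(run_step, state, incumbent_cost, budget_seconds):
        """Run one algorithm's step function until a wall-clock budget is spent."""
        best = incumbent_cost(state)
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:
            state = run_step(state)
            best = min(best, incumbent_cost(state))
        return best

    # Compare under identical short and long budgets, several trials each:
    # for budget in (1.0, 60.0):
    #     sa = [best_within_budget(sa_step, sa_init(), sa_cost, budget)
    #           for _ in range(20)]
    #     ga = [best_within_budget(ga_step, ga_init(), ga_cost, budget)
    #           for _ in range(20)]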

Let us now look at some examples of empirical comparisons.

In [1], there is a good discussion on how a meaningful empirical comparison should be done. As an example, they consider a tree cost minimization problem. They compare several algorithms including SA and GA, and carefully normalize the execution time given to different algorithms. Their results indicate that given the same amount of time, SA consistently gave better solutions than GA.

In [2], Manikas and Cain compare SA and GA for a circuit partitioning problem. They analyze the statistical confidence of the results very carefully, comparing approximately 20 trials of each algorithm. However, there is no mention of the execution time used. Still, they conclude that "the genetic algorithm was shown to produce solutions equal to or better than simulated annealing". This conclusion may be true, but its relevance is in doubt because of the missing information.

In [3], Mann and Smith compare SA and GA for a traffic routing problem. They do report the execution times, but again, the comparison mainly focuses on solution costs. The execution times of the GA were from 10 to 24 times longer than those of the SA. They report that GA gave slightly better solutions than SA, but they also note that SA achieved its solutions much more quickly. However, they do not report what happens if the algorithms are given the same amount of time.

 

References

[1] Empirical comparison of stochastic algorithms. In Proceedings of the Second Nordic Workshop on Genetic Algorithms and their Applications. Available on the Web: <http://www.uwasa.fi/cs/publications/2NWGA/2NWGA.html>.

[2] Theodore W. Manikas and James T. Cain. Genetic Algorithms vs. Simulated Annealing: A Comparison of Approaches for Solving the Circuit Partitioning Problem. University of Pittsburgh, Dept. of Electrical Engineering, May 1996. Available on the Web: <http://www.pitt.edu/~twmst5/tr-96-101.html>.

[3] Jason W. Mann and George D. Smith. A Comparison of Heuristics for Telecommunications Traffic Routing. In Rayward-Smith et al. (editors), Modern Heuristic Search Methods. John Wiley & Sons, 1996.

[4] J. David Schaffer and Larry J. Eshelman. Combinatorial Optimization by Genetic Algorithms: The Value of the Genotype/Phenotype Distinction. In Rayward-Smith et al. (editors), Modern Heuristic Search Methods. John Wiley & Sons, 1996.