Article ID: iaor19982409
Country: Netherlands
Volume: 75
Issue: 1
Start Page Number: 189
End Page Number: 208
Publication Date: Dec 1997
Journal: Annals of Operations Research
Authors: Sadeh Norman M., Nakakuki Yoichiro, Thangiah Sam R.
Keywords: vehicle routing & scheduling
Simulated Annealing (SA) procedures can potentially yield near-optimal solutions to many difficult combinatorial optimization problems, though often at the expense of intensive computational effort. The single most significant source of inefficiency in SA search is the inherent stochasticity of the procedure, which typically requires that the procedure be rerun a large number of times before a near-optimal solution is found. This paper describes a mechanism that attempts to learn the structure of the search space over multiple SA runs on a given problem. Specifically, probability distributions are dynamically updated over multiple runs to estimate, at different checkpoints, how promising an SA run appears to be. Based on this mechanism, two types of criteria are developed that aim at increasing search efficiency: (1) a cutoff criterion, used to determine when to abandon unpromising runs, and (2) restart criteria, used to determine whether to start a fresh SA run or restart search in the middle of an earlier run. Experimental results obtained on a class of complex job shop scheduling problems show (1) that SA can produce high-quality solutions for this class of problems, if run a large number of times, and (2) that our learning mechanism can significantly reduce the computation time required to find high-quality solutions to these problems. The results also indicate that the closer one wants to be to the optimum, the larger the speedup. Similar results obtained on a smaller set of benchmark Vehicle Routing Problems with Time Windows suggest that our learning mechanisms should help improve the efficiency of SA in a number of different domains.
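To make the checkpoint-based cutoff idea concrete, the sketch below runs SA repeatedly on a toy problem and abandons a run when its best-so-far cost at a checkpoint looks poor relative to the costs recorded at that checkpoint in earlier runs. Everything here is an illustrative assumption rather than the paper's formulation: the random TSP instance, the geometric cooling schedule, the mean-plus-one-standard-deviation cutoff threshold, and the choice of 10 checkpoints are all placeholders, and the restart criteria from the paper are not modelled.

```python
# Illustrative sketch only: checkpoint-based cutoff for repeated SA runs.
# The problem instance, cooling schedule, and cutoff rule are assumptions,
# not the authors' method.
import math
import random
import statistics

random.seed(0)

# Toy problem: symmetric TSP on random points in the unit square (assumption).
N = 30
pts = [(random.random(), random.random()) for _ in range(N)]

def tour_cost(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % N]]) for i in range(N))

def neighbor(tour):
    """2-opt style move: reverse a randomly chosen segment of the tour."""
    i, j = sorted(random.sample(range(N), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

STEPS, CHECKPOINTS = 20_000, 10
CHECK_EVERY = STEPS // CHECKPOINTS

def sa_run(checkpoint_history, cutoff_sigma=1.0):
    """One SA run that may be abandoned early at a checkpoint.

    checkpoint_history[k] holds the best-so-far costs observed at checkpoint k
    in previous (completed) runs; the run is cut off when its cost exceeds
    mean + cutoff_sigma * stdev of that history (illustrative criterion).
    Returns the best cost found, or None if the run was abandoned.
    """
    tour = random.sample(range(N), N)
    cost = best = tour_cost(tour)
    temp = 1.0
    for step in range(STEPS):
        temp *= 0.9997                     # geometric cooling (assumption)
        cand = neighbor(tour)
        c = tour_cost(cand)
        if c < cost or random.random() < math.exp((cost - c) / temp):
            tour, cost = cand, c
            best = min(best, cost)
        # Checkpoint: compare this run against the learned distribution.
        if (step + 1) % CHECK_EVERY == 0:
            k = (step + 1) // CHECK_EVERY - 1
            history = checkpoint_history[k]
            if len(history) >= 3:
                mu, sd = statistics.mean(history), statistics.pstdev(history)
                if sd > 0 and best > mu + cutoff_sigma * sd:
                    return None            # unpromising: abandon this run
            history.append(best)
    return best

# Multi-run harness: the checkpoint distributions are updated across runs.
history = [[] for _ in range(CHECKPOINTS)]
results = [r for r in (sa_run(history) for _ in range(20)) if r is not None]
print(f"completed runs: {len(results)}, best tour length: {min(results):.3f}")
```

Note that in this simplified harness only runs that survive a checkpoint contribute to its history, which biases the learned distribution toward good runs; a more faithful treatment of the paper's distribution-updating scheme, and of the restart criteria that reuse mid-run states, would require design decisions not covered by the abstract.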