Article ID: | iaor1993903 |
Country: | United States |
Volume: | 40 |
Issue: | 5 |
Start Page Number: | 867 |
End Page Number: | 876 |
Publication Date: | Sep 1992 |
Journal: | Operations Research |
Authors: | Gani J., Yakowitz S., Hayes R. |
Keywords: | health services, computers, stochastic processes |
Following an outline of dynamic Markov fields, the authors briefly describe some spatial models for contagious diseases and pose a prototype epidemic control problem. The notion of automatic learning is then introduced, and its relevance to epidemic control is described. In essence, once a contagion model is adopted and a domain of controls has been selected, learning can be used to obtain asymptotically optimal performance. (The learning algorithm is a synthesis of simulation and optimization, and is a suitable alternative to response surface methodology in many applications.) The end product is the same optimal control as would be obtained by a conventional analysis. The point is that the current understanding of dynamic Markov fields does not permit conventional analysis; automatic learning has no computationally competitive alternative. The theory is illustrated by application to a spatial epidemic control problem.
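The simulation-plus-optimization learning scheme summarized above can be illustrated with a minimal sketch. The toy model below (a susceptible-infected contagion on a grid with nearest-neighbour spread, a scalar treatment-rate control, and a sample-average search over candidate controls) is an assumption for illustration only; it is not the authors' dynamic Markov field model or their learning algorithm, but it shows the basic loop: simulate each candidate control, estimate its expected cost, and select the empirically best one, with accuracy improving as the number of simulation runs grows.

```python
import random

def simulate_epidemic(control_rate, grid_size=10, beta=0.3, steps=30, seed=None):
    """One Monte Carlo run of a toy spatial contagion on a grid.

    Illustrative stand-in for a spatial epidemic model: cells are
    susceptible (0) or infected (1); infection spreads from the four
    nearest neighbours, and the control raises the recovery rate.
    Returns the accumulated cost (infection burden + control effort).
    """
    rng = random.Random(seed)
    grid = [[0] * grid_size for _ in range(grid_size)]
    grid[grid_size // 2][grid_size // 2] = 1  # one initial infective
    total_cost = 0.0
    for _ in range(steps):
        new = [row[:] for row in grid]
        for i in range(grid_size):
            for j in range(grid_size):
                if grid[i][j] == 0:
                    # infection pressure from infected nearest neighbours
                    nbrs = sum(grid[x][y]
                               for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                               if 0 <= x < grid_size and 0 <= y < grid_size)
                    if nbrs and rng.random() < 1 - (1 - beta) ** nbrs:
                        new[i][j] = 1
                else:
                    # control (e.g. treatment effort) cures with this probability
                    if rng.random() < control_rate:
                        new[i][j] = 0
        grid = new
        infected = sum(map(sum, grid))
        total_cost += infected + 5.0 * control_rate  # illness cost + control cost
    return total_cost

def learn_control(candidates, runs=50, seed=0):
    """Sample-average learning over a finite domain of controls.

    Estimates each candidate's expected cost by repeated simulation and
    returns the empirically best control; as `runs` grows, the selection
    converges to the optimum for this model (the asymptotic-optimality
    idea described in the abstract, in its simplest form).
    """
    best, best_cost = None, float("inf")
    for k, c in enumerate(candidates):
        avg = sum(simulate_epidemic(c, seed=seed + 1000 * k + r)
                  for r in range(runs)) / runs
        if avg < best_cost:
            best, best_cost = c, avg
    return best, best_cost
```

A typical use would be `learn_control([0.0, 0.2, 0.5, 0.8])`, which trades the cost of infections against the cost of applying treatment. All model details here (grid dynamics, cost weights, candidate set) are hypothetical choices made for the sketch.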