Article ID: | iaor201524047 |
Volume: | 61 |
Issue: | 8 |
Start Page Number: | 557 |
End Page Number: | 576 |
Publication Date: | Dec 2014 |
Journal: | Naval Research Logistics (NRL) |
Authors: | Lin Kyle Y, Glazebrook Kevin D, Atkinson Michael P |
Keywords: | optimization, graphs, military & defence |
Consider a patrol problem in which a patroller traverses the edges of a graph to detect potential attacks at its nodes. An attack takes a random amount of time to complete. The patroller takes one time unit to move to and inspect an adjacent node, and detects an ongoing attack there with some probability. If an attack completes before it is detected, a cost is incurred. The attack time distribution, the cost of a successful attack, and the detection probability all depend on the attack node. The patroller seeks a patrol policy that minimizes the expected cost incurred when, and if, an attack eventually happens. We consider two cases: a random attacker chooses where to attack according to predetermined probabilities, while a strategic attacker chooses where to attack so as to incur the maximal expected cost. In each case, computing the optimal solution, although possible, quickly becomes intractable for problems of practical size. Our main contribution is to develop efficient index policies, based on Lagrangian relaxation methodology and also on approximate dynamic programming, which typically come within 1% of optimality while requiring orders of magnitude less computation time than computing the optimal policy.
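The patrol model in the abstract can be made concrete with a small simulation. The sketch below is illustrative only (it is not the paper's index-policy method): it evaluates a fixed cyclic patrol route against a random attacker by Monte Carlo, where the node parameters (attack probabilities, detection probabilities, costs, and attack-time sampler) are hypothetical placeholders chosen for the example.

```python
import random

def simulate_cost(route, attack_probs, detect_prob, cost, attack_time_sampler, rng):
    """Simulate one attack against a cyclic patrol route; return the cost incurred.

    route: list of nodes visited in order, one node per time unit (cyclic).
    attack_probs: dict node -> probability the random attacker picks that node.
    detect_prob: dict node -> probability an inspection detects an ongoing attack.
    cost: dict node -> cost of a successful (undetected) attack.
    attack_time_sampler: function (node, rng) -> integer attack completion time.
    """
    nodes = list(attack_probs)
    target = rng.choices(nodes, weights=[attack_probs[v] for v in nodes])[0]
    completion = attack_time_sampler(target, rng)   # time units to finish the attack
    start = rng.randrange(len(route))               # attack begins at a random point of the cycle
    for t in range(1, completion + 1):              # one inspection per time unit
        visited = route[(start + t) % len(route)]
        if visited == target and rng.random() < detect_prob[target]:
            return 0.0                              # detected before completion
    return cost[target]                             # attack completed undetected

def expected_cost(route, attack_probs, detect_prob, cost, sampler, n=20000, seed=0):
    """Monte Carlo estimate of the expected cost under the given patrol route."""
    rng = random.Random(seed)
    return sum(simulate_cost(route, attack_probs, detect_prob, cost, sampler, rng)
               for _ in range(n)) / n
```

For example, on a two-node graph with the route alternating between the nodes, setting every detection probability to 1 and attack times longer than the cycle yields zero expected cost, while setting detection probabilities to 0 recovers the attacker's expected cost. A strategic attacker would instead be modeled by maximizing this estimate over the choice of target node.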