Article ID: iaor20173313
Volume: 42
Issue: 3
Start Page Number: 692
End Page Number: 722
Publication Date: Aug 2017
Journal: Mathematics of Operations Research
Authors: Ying Lei, Srikant R, Kang Xiaohan
Keywords: networks: scheduling, combinatorial optimization, queues: applications, decision
In many computing and networking applications, arriving tasks have to be routed to one of many servers, with the goal of minimizing queueing delays. When the number of processors is very large, a popular routing algorithm works as follows: select two servers at random and route an arriving task to the least loaded of the two. It is well known that this algorithm dramatically reduces queueing delays compared to an algorithm that routes each task to a single randomly selected server. In recent cloud computing applications, however, it has been observed that sampling even two queues per arriving task can be expensive and can actually increase delays due to messaging overhead, so there is interest in reducing the number of sampled queues per arriving task. In this paper, we show that the number of sampled queues can be dramatically reduced by exploiting the fact that tasks arrive in batches (called jobs). In particular, we sample a subset of the queues whose size is slightly larger than the batch size (so, on average, we sample only slightly more than one queue per task). Once a random subset of the queues is sampled, we propose a new load‐balancing method called
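To make the sampling comparison in the abstract concrete, the sketch below simulates three routing policies on a toy arrival stream: purely random routing, power-of-two-choices, and a batch-based policy that samples slightly more than one queue per task of a job. The greedy "fill the shortest sampled queue" rule in route_batch, the parameter names, and the choice of one extra sampled queue per job are illustrative assumptions, not the paper's exact algorithm; departures are deliberately omitted, so the output only shows how evenly each policy spreads arrivals.

```python
import random


def route_random(queues, rng):
    # Baseline: route one task to a single uniformly chosen queue.
    i = rng.randrange(len(queues))
    queues[i] += 1


def route_power_of_two(queues, rng):
    # Power-of-two-choices: sample two distinct queues, send the task to the shorter one.
    i, j = rng.sample(range(len(queues)), 2)
    target = i if queues[i] <= queues[j] else j
    queues[target] += 1


def route_batch(queues, batch_size, rng, extra=1):
    # Batch-based sampling (illustrative sketch): for a job of `batch_size` tasks,
    # sample a subset slightly larger than the batch, then greedily place each task
    # on the currently least-loaded sampled queue.  With extra=1 this uses
    # (batch_size + 1) / batch_size samples per task, i.e. slightly more than one.
    sample = rng.sample(range(len(queues)), batch_size + extra)
    for _ in range(batch_size):
        target = min(sample, key=lambda i: queues[i])
        queues[target] += 1


if __name__ == "__main__":
    rng = random.Random(0)
    n = 100      # number of servers
    batch = 10   # tasks per job (batch size)
    jobs = 1000  # number of arriving jobs to simulate

    q_rand, q_two, q_batch = [0] * n, [0] * n, [0] * n
    for _ in range(jobs):
        for _ in range(batch):
            route_random(q_rand, rng)
            route_power_of_two(q_two, rng)
        route_batch(q_batch, batch, rng)

    # Compare the maximum backlog under each policy (arrivals only, no service).
    print("max load, random routing:       ", max(q_rand))
    print("max load, power of two choices: ", max(q_two))
    print("max load, batch sampling sketch:", max(q_batch))
```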