Efficient Ranking and Selection in Parallel Computing Environments

Article ID: iaor20171636
Volume: 65
Issue: 3
Start Page Number: 821
End Page Number: 836
Publication Date: Jun 2017
Journal: Operations Research
Authors: Eric C. Ni, Dragos F. Ciocan, Shane G. Henderson, Susan R. Hunter
Keywords: computational analysis: parallel computers, stochastic processes, simulation, design, decision, queues: applications, networks: scheduling
Abstract:

The goal of ranking and selection (R&S) procedures is to identify the best stochastic system from among a finite set of competing alternatives. Such procedures require constructing estimates of each system's performance, which can be obtained simultaneously by running multiple independent replications on a parallel computing platform. Nontrivial statistical and implementation issues arise when designing R&S procedures for a parallel computing environment. We propose several design principles for parallel R&S procedures that preserve statistical validity and maximize core utilization, especially when large numbers of alternatives or cores are involved. These principles are followed closely by our parallel Good Selection Procedure (GSP), which, under the assumption of normally distributed output, (i) guarantees to select a system in the indifference zone with high probability, (ii) runs efficiently in tests on up to 1,024 parallel cores, and (iii) in an example requires smaller sample sizes than existing parallel procedures, particularly for large problems (more than 10^6 alternatives). In our computational study we discuss three methods for implementing GSP on parallel computers, namely the Message-Passing Interface (MPI), Hadoop MapReduce, and Spark, and show that Spark provides a good compromise between the efficiency of MPI and robustness to core failures. The e-companion is available at https://doi.org/10.1287/opre.2016.1577.
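To make the parallel-replication idea concrete, the sketch below shows a generic fixed-budget approach: run independent replications of each alternative across worker processes, then select the alternative with the best sample mean. This is not the authors' GSP and carries no indifference-zone guarantee; the `simulate` function, `NUM_ALTERNATIVES`, and the replication budget are hypothetical placeholders for illustration only.

```python
# Generic illustration (not the authors' GSP): estimate each alternative's mean
# performance from independent replications run in parallel, then pick the best.
# The toy `simulate` model and all constants below are hypothetical placeholders.
import random
from concurrent.futures import ProcessPoolExecutor
from statistics import mean

NUM_ALTERNATIVES = 100              # hypothetical number of competing systems
REPLICATIONS_PER_ALTERNATIVE = 50   # fixed, equal replication budget per system


def simulate(job):
    """One independent replication of alternative i (toy normal output)."""
    i, seed = job
    rng = random.Random(seed)                 # per-replication seed for independence
    true_mean = i / NUM_ALTERNATIVES          # stand-in for the unknown performance
    return i, rng.gauss(true_mean, 1.0)


if __name__ == "__main__":
    # Enumerate all (alternative, seed) replication jobs up front.
    jobs = [(i, 1_000 * i + r)
            for i in range(NUM_ALTERNATIVES)
            for r in range(REPLICATIONS_PER_ALTERNATIVE)]
    outputs = {i: [] for i in range(NUM_ALTERNATIVES)}
    # Worker processes stand in for parallel cores; chunksize batches small jobs.
    with ProcessPoolExecutor() as pool:
        for i, y in pool.map(simulate, jobs, chunksize=64):
            outputs[i].append(y)
    best = max(outputs, key=lambda i: mean(outputs[i]))
    print(f"selected alternative {best} with estimated mean {mean(outputs[best]):.3f}")
```

In contrast to this equal-allocation sketch, the procedures discussed in the paper adapt sampling and screening decisions to preserve a statistical guarantee while keeping cores busy.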
