Article ID: iaor20051493
Country: United Kingdom
Volume: 14
Issue: 2
Start Page Number: 3
End Page Number: 13
Publication Date: Apr 2001
Journal: OR Insight
Authors: Mathieson, Graham
The bedrock of military operational research has for many years been the use of combat models to convert measures of system performance into measures of force effectiveness. Quantifying effectiveness in the context of operations other than war, while taking account of human and organisational aspects, has proved difficult with conventional modelling techniques. The need for multiple measures of merit and multiple decision criteria makes assessment hierarchies very attractive to hard-pressed executives. There is also a trend, in these cost-conscious times, towards cheaper and more common OA tools and methods across the full range of investment decision-making, from requirements capture, through design, to investment appraisal. In all of these application areas, assessment hierarchies appear to offer a relatively simple, highly visible and low-cost means of assessing the value of complex investments. However, this appearance is dangerously deceptive. The relatively uncontrolled and unrigorous use of assessment hierarchies, combined with the self-reinforcing features of facilitated judgemental methods, can lead to questionable advice to decision-makers. Many previous treatments of this subject have focused on the details of judgement elicitation or mathematical manipulation, without fully addressing the larger issues of appropriateness and validity. This paper discusses the principles and practice of applying assessment hierarchies more rigorously. Drawing on recent study experience in the areas of Intelligence and Information Systems, it distinguishes between estimating effectiveness and valuing performance, sets out conditions for appropriate (and inappropriate) use of assessment hierarchies, and offers practical elements of good practice.
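For readers unfamiliar with the mechanics, the aggregation that typically underlies an assessment hierarchy is a weighted combination of judgemental scores rolled up from leaf criteria to a single root value. The sketch below is not taken from the paper; the node names, weights and scores are illustrative assumptions only, and it shows only the simple weighted-sum form whose appropriateness the paper questions.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One criterion in a hypothetical assessment hierarchy."""
    name: str
    weight: float                      # weight relative to sibling criteria
    score: float | None = None         # leaf-level normalised score in [0, 1]
    children: list["Node"] = field(default_factory=list)

    def value(self) -> float:
        """Return the leaf score, or the weighted average of child values."""
        if not self.children:
            return self.score if self.score is not None else 0.0
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.value() for c in self.children) / total_weight


# Illustrative hierarchy for a notional information-system option (hypothetical values).
root = Node("Overall value", 1.0, children=[
    Node("Timeliness", 0.4, score=0.7),
    Node("Accuracy", 0.4, score=0.9),
    Node("Usability", 0.2, children=[
        Node("Training burden", 0.5, score=0.6),
        Node("Interface quality", 0.5, score=0.8),
    ]),
])

print(f"Aggregate value: {root.value():.2f}")   # 0.4*0.7 + 0.4*0.9 + 0.2*0.7 = 0.78
```

The single aggregate number this produces is exactly what makes hierarchies attractive to decision-makers, and also what can conceal questionable weighting and elicitation choices of the kind the paper examines.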