Article ID: iaor201112751
Volume: 58
Issue: 3
Start Page Number: 236
End Page Number: 254
Publication Date: Apr 2011
Journal: Naval Research Logistics (NRL)
Authors: Leap Nathan J, Bauer Kenneth W
Keywords: statistics: inference, data mining, quality & reliability, simulation
There is no universally accepted methodology for determining how much confidence one should place in the output of a classification system. In this article, we develop a confidence paradigm: a theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or warfighter). The developer designs and tests the classification system at a macro-level. The user fields the system in an environment often quite different from the one in which it was developed; the user operates at a micro-level and is interested in the individual indications the system produces. The paradigm rests on two assumptions: that system confidence acts like, or can be modelled as, value, and that indication confidence can be modelled as a function of the posterior probability estimates. The viewpoints of the developer and the user are unified through the fundamental proposition that the expected value of the user's confidence should be approximately equal to the developer's confidence. This paradigm provides a direct link between traditional decision analysis techniques and traditional pattern recognition techniques. The methodology is applied to an automatic target recognition data set, and the results demonstrate the sort of behavior expected of a rational confidence measure.
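The abstract does not reproduce the paradigm's specific confidence functions, but the consistency requirement can be sketched. The following is a minimal Python sketch under two hypothetical assumptions not taken from the article: that indication confidence is the maximum posterior probability behind each individual indication, and that developer confidence is the correct-classification rate estimated during testing. The data are synthetic stand-ins for the ATR data set.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 3  # hypothetical test-set size and number of target classes

# Synthetic posterior probability estimates from a k-class classifier
# (an illustrative stand-in for the article's ATR data set).
logits = rng.normal(size=(n, k))
posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Draw truth labels from the posteriors themselves, i.e., assume the
# posterior estimates are perfectly calibrated for this sketch.
labels = np.array([rng.choice(k, p=p) for p in posteriors])
predictions = posteriors.argmax(axis=1)

# Micro-level (user) view: one hypothetical indication confidence is the
# maximum posterior probability behind each individual indication.
indication_confidence = posteriors.max(axis=1)

# Macro-level (developer) view: one hypothetical system confidence is the
# overall correct-classification rate measured during developmental testing.
developer_confidence = (predictions == labels).mean()

# Fundamental proposition: E[user's confidence] ~= developer's confidence.
print(f"E[indication confidence] = {indication_confidence.mean():.3f}")
print(f"developer confidence     = {developer_confidence:.3f}")
```

Under these assumed forms the two quantities coincide in expectation, since the probability that the top-posterior class is correct equals the maximum posterior when the posteriors are calibrated; a persistent gap between them would flag the confidence measure as inconsistent in the paradigm's sense.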