Article ID: iaor201526502
Volume: 35
Issue: 6
Start Page Number: 1101
End Page Number: 1113
Publication Date: Jun 2015
Journal: Risk Analysis
Authors: Weimer, David L.
Keywords: government, risk, statistics: inference, simulation
Policy analysis often demands quantitative prediction, especially cost-benefit analysis, which requires the comprehensive quantification and monetization of all valued impacts. Using parameter estimates and their precisions, analysts can apply Monte Carlo simulation to create distributions of net benefits that convey the level of certainty about the fundamental question of interest: Will net benefits be positive if the policy is adopted? An inappropriate focus on hypothesis testing of parameters rather than on prediction sometimes leads analysts to treat statistically insignificant coefficients as if they, and their standard errors, were zero. One alternative is to use all estimates and their standard errors, whether or not the estimates are statistically significant. Another is to use all estimates but shrink them toward zero and adjust their standard errors to guard against regression to the mean. Comparing the three methods (use only statistically significant estimates and their standard errors; use all estimates and their standard errors; use shrunk estimates and adjusted standard errors) in Monte Carlo simulation suggests that treating statistically insignificant coefficients as zero rarely minimizes the mean squared error of prediction. Using shrunk estimates appears to provide a more robust minimization of that error. The simulations presented here suggest that routinely shrinking estimates is a robust approach if one believes there is a substantial probability that the true value of the parameter is near zero.
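As a rough illustration of the comparison the abstract describes, the sketch below simulates repeated estimation of a single coefficient and computes the mean squared error of prediction under the three treatments. Every specific here is an assumption for illustration, not taken from the article: the true parameter values, the unit standard error, the 1.96 significance cutoff, and the empirical-Bayes-style shrinkage weight, which is not necessarily the rule Weimer uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_mse(beta_true, se=1.0, n_studies=100_000, z_crit=1.96):
    """Mean squared error of prediction for three treatments of a coefficient.

    Each simulated study yields an estimate beta_hat ~ N(beta_true, se^2);
    prediction error is measured against beta_true, which stands in for the
    net-benefit impact that scales with the coefficient.
    """
    beta_hat = rng.normal(beta_true, se, size=n_studies)

    # Method 1: treat statistically insignificant coefficients (and their
    # standard errors) as zero.
    significant = np.abs(beta_hat / se) >= z_crit
    m1 = np.where(significant, beta_hat, 0.0)

    # Method 2: use every estimate, significant or not.
    m2 = beta_hat

    # Method 3: shrink every estimate toward zero. Illustrative
    # empirical-Bayes-style weight: noisier estimates shrink more.
    m3 = beta_hat * beta_hat**2 / (beta_hat**2 + se**2)

    return {name: float(np.mean((m - beta_true) ** 2))
            for name, m in (("significant-only", m1),
                            ("all-estimates", m2),
                            ("shrunk", m3))}

# When the true parameter is near zero, shrinking tends to win; the
# significance filter is rarely the MSE-minimizing choice.
for beta in (0.0, 0.5, 1.0, 3.0):
    print(f"beta_true={beta}: {prediction_mse(beta)}")
```

The weight beta_hat^2 / (beta_hat^2 + se^2) mimics a posterior-mean weight under a zero-centered prior, so it embodies the belief, noted in the abstract, that the true value may well be near zero. The article's accompanying adjustment of standard errors, which would feed the Monte Carlo distribution of net benefits itself, is omitted here for brevity.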