Palisade Knowledge Base


4.8. P-Values and Distribution Fitting

Applies to: @RISK 5.x–7.x

How do I get p-values, critical values, and confidence intervals of parameters of fitted distributions?

In the Fit Distributions to Data dialog, on the Bootstrap tab, tick the box labeled "Run Parametric Bootstrap". You can also specify the number of resamples and the confidence level you want for the parameters. Bootstrapping adds time to the fitting process, particularly if you have a large data set.

Click the Fit button as usual. You'll see a pop-up window tracking the progress of the bootstrap.

When the fit has finished, you can click the Statistical Summary icon (last of the small icons at the bottom) to see a comprehensive report. Or you can select one distribution in the list at the left and click the Bootstrap Analysis icon (second from right) to see just the fit statistics and p-values, or just the parameter confidence intervals, for that one distribution. If the information is not available because the bootstrapping failed, you will see the message "Unable to refit one or more bootstrap resamples."
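To see what a parametric bootstrap does conceptually, here is a minimal sketch in Python using scipy. This is an illustration of the general technique, not @RISK's implementation; the normal distribution, sample data, and resample count are all assumptions chosen for the example. The idea is: fit the distribution to the data, draw many resamples from the *fitted* distribution, refit each resample, and take percentile intervals of the refitted parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical observed sample; in practice this is your data set.
data = rng.normal(loc=10.0, scale=2.0, size=200)

# Step 1: fit the candidate distribution to the original data (MLE).
mu_hat, sigma_hat = stats.norm.fit(data)

# Step 2: parametric bootstrap -- resample from the fitted distribution,
# refit each resample, and collect the refitted parameters.
n_resamples = 1000
boot_params = np.empty((n_resamples, 2))
for i in range(n_resamples):
    resample = stats.norm.rvs(loc=mu_hat, scale=sigma_hat,
                              size=len(data), random_state=rng)
    boot_params[i] = stats.norm.fit(resample)

# Step 3: 95% percentile confidence intervals for each parameter.
lo, hi = np.percentile(boot_params, [2.5, 97.5], axis=0)
print(f"mu:    {mu_hat:.3f}  95% CI [{lo[0]:.3f}, {hi[0]:.3f}]")
print(f"sigma: {sigma_hat:.3f}  95% CI [{lo[1]:.3f}, {hi[1]:.3f}]")
```

The extra fitting work in step 2 is why enabling the bootstrap slows the fit, especially for large data sets or many resamples.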

Why doesn't @RISK give p-values for the Kolmogorov-Smirnov and Anderson-Darling tests for most fits?  Why do the ones that @RISK does give disagree with other software packages?

Basically, the p-values require knowledge of the sampling distribution of the K-S or A-D statistic, and that sampling distribution depends on which parameters, if any, were estimated from the data.  In general it is not known exactly, though there are some very particular circumstances where it is.

While we don't know exactly what methods other packages use, there are several ways to deal with this problem, and the approach @RISK takes is deliberately cautious.  If we cannot compute the p-value, we either report a range of values it could fall in (when we can determine one) or we don't return a value at all.  Some packages fall back on the "no parameters estimated" case, whose sampling distribution can be determined in many situations, but this yields an ultra-conservative answer when the parameters actually were estimated from the data.  A good reference for how @RISK handles this is the book Goodness-of-Fit Techniques by D'Agostino and Stephens.
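The discrepancy can be illustrated with scipy. This sketch is an assumption-laden example, not a description of @RISK's or any other package's internals: it compares the K-S p-value from the fully-specified ("no parameters estimated") distribution against a bootstrap p-value that accounts for the fact that the parameters were fitted to the same data being tested.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(5.0, 1.5, size=100)  # hypothetical sample

# K-S statistic of the data against the distribution fitted to it.
params = stats.norm.fit(data)
d_obs = stats.kstest(data, "norm", args=params).statistic

# p-value from the standard K-S table, which assumes the parameters
# were fully specified in advance -- NOT estimated from the data.
p_table = stats.kstest(data, "norm", args=params).pvalue

# Bootstrap the sampling distribution of D *with* parameter estimation:
# each resample is refitted before its statistic is computed.
d_boot = []
for _ in range(500):
    resample = stats.norm.rvs(*params, size=len(data), random_state=rng)
    refit = stats.norm.fit(resample)
    d_boot.append(stats.kstest(resample, "norm", args=refit).statistic)

p_boot = float(np.mean(np.array(d_boot) >= d_obs))
print(f"table p = {p_table:.3f}, bootstrap p = {p_boot:.3f}")
```

Because refitting shrinks the K-S statistic, the table p-value is typically larger than the bootstrap p-value, which is why two packages applying different conventions to the same fit can report quite different numbers.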

Do you have any cautions for my use of p-values?

Sometimes too much stress is placed on p-values in distribution fitting.  It's really not valid to pick a p-value cutoff as a "bright-line test" and say that any fit above it is good and any fit below it is bad.  There is no substitute for looking at the fitted distribution overlaid on the data.

We recommend against using the p-values for your primary determination of which distribution is the best one for your data set. For some guidance, see "Fit Statistics" in the @RISK help file or in Appendix A of the user manual, and Interpreting AIC Statistics in this Knowledge Base.

Last edited: 2017-06-29
