Static Value of Input Differs from Simulated Mean
Applies to:
@RISK, all releases
RISKOptimizer, all releases
@RISK Developer's Kit (RDK), all releases
Why is there a difference between the expected value that appears in the spreadsheet cell that contains an @RISK function and the mean of the simulation results for the input?
The simulated mean of a distribution will typically be close to the theoretical mean, but not exactly the same. This is normal statistical behavior. To illustrate, set up a simulation in the following way:
Start with a blank workbook.
In cell A1, define an @RISK input with a normal distribution that has a mean of 10 and a standard deviation of 1.
Select the Simulation Settings icon from the @RISK toolbar. On the Iterations tab, set the number of iterations to 10,000. On the Sampling tab, set the sampling type to Latin Hypercube and set a fixed random generator seed of 1.
Statistical theory tells us that the sampling distribution of the mean of 10,000 independent draws from this input is a normal distribution with a mean of 10 and a standard deviation (often called the standard error) of 1/√10000 = 0.01. Although the exact results will vary by version of @RISK, you should find that the mean of the simulated input falls well within the interval 9.99 to 10.01, that is, within one standard error of the theoretical mean. (The default Latin Hypercube sampling type does considerably better than classic Monte Carlo sampling.)
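The effect can be reproduced outside @RISK. The sketch below (plain Python, illustrative only; @RISK's own Latin Hypercube implementation differs in detail) draws 10,000 samples from a Normal(10, 1) distribution both ways and compares the sample means to the theoretical mean:

```python
# Illustrative sketch (not @RISK): why the simulated mean lands near the
# theoretical mean of 10, and why Latin Hypercube does better than
# classic Monte Carlo sampling.
import random
from statistics import NormalDist, mean

random.seed(1)                     # fixed seed, like the dialog setting
n = 10_000                         # number of iterations
dist = NormalDist(mu=10, sigma=1)  # the input distribution

# Classic Monte Carlo: 10,000 independent draws via inverse-CDF sampling.
mc = [dist.inv_cdf(random.random()) for _ in range(n)]

# Latin Hypercube (one dimension): exactly one draw from each of the
# n equal-probability strata of the distribution, then shuffled.
strata = [(k + random.random()) / n for k in range(n)]
random.shuffle(strata)
lhs = [dist.inv_cdf(u) for u in strata]

se = 1 / n ** 0.5  # theoretical standard error of the mean = 0.01
print(f"Monte Carlo mean:     {mean(mc):.4f}")
print(f"Latin Hypercube mean: {mean(lhs):.4f}")
```

The Monte Carlo mean typically wanders on the order of one standard error (0.01) away from 10, while the stratified Latin Hypercube mean lands within a small fraction of that.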
But the displayed value is not even close to the simulated mean of the input. How can that be?
Most likely this is caused by a setting in your model. In the Simulation Settings dialog, look at "When a Simulation is Not Running" and verify that it is set to Static Values. Then set "Where RiskStatic is Not Defined, Use" to True Expected Values.
As the dialog box implies, if a distribution contains a RiskStatic property function, the cell displays the RiskStatic value instead of the statistic you select under Static Values. For example, a cell containing =RiskNormal(10,1,RiskStatic(12)) displays 12 (an illustrative value), even though the distribution's true expected value is 10.
Last edited: 2015-11-26