Saturday, November 5, 2011

Sensitivity chart does not converge in Crystal Ball

Recently I received a support request about a sensitivity chart issue: the top few assumptions contributing to the variance of the target forecast kept changing every time the simulation was reset and run (with no seed set). In general, even without a fixed seed, the list of top contributing assumptions should converge once enough trials are run. Even with too few trials, the list should not vary much from reset to reset.

The model was simple: around ten assumptions linked to a single forecast through a series of formulas. The flowchart below (Figure 1) shows the types of formulas being used.
Figure 1: Model Flowchart

While running the model, I noticed two things:
  1. As mentioned above, the sensitivity chart does not stabilize even with a large number of trials: the contribution-to-variance values change, the positions of variables on the chart change, or both.
  2. Switching to the Sensitivity data view, the rank correlation values are at or near zero (to two decimal places). That would normally indicate spurious correlations, but in this model that should not be the case.
These are signs that the sensitivity chart may not be working as intended, and they prompted me to check its documented limitations (also listed in the User's Guide installed with Crystal Ball; from Excel 2007: Crystal Ball tab > Resources > Crystal Ball Documentation, then Analyzing Other Charts > Understanding and Using Sensitivity Charts > Limitations of Sensitivity Charts). This turned out to be a classic case of one of those limitations: the sensitivity analysis does not support non-monotonic relationships between the assumptions and the target forecast. Looking at the formulas in the model, the forecast is an aggregate of functions that apply absolute values to the assumptions. That makes the relationship between the assumptions and the forecast non-monotonic, and hence the problem.
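To see why the rank correlations collapse toward zero here, consider a minimal sketch (in Python with NumPy/SciPy, not Crystal Ball itself) of a forecast built from the absolute value of a symmetric assumption. The distribution, sample size, and formulas below are illustrative assumptions, not the user's model, but they show the effect: the rank correlation stays near zero no matter how many trials are run, even though the assumption fully determines the forecast.

# Illustrative sketch: rank correlation of a monotonic vs. a non-monotonic formula.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng()   # no fixed seed, mirroring the model in question
trials = 10_000

x = rng.normal(loc=0.0, scale=1.0, size=trials)   # a symmetric assumption
y_monotonic = 2.0 * x + 3.0                       # monotonic transform
y_abs = np.abs(x)                                 # ABS()-style, non-monotonic

rho_monotonic, _ = spearmanr(x, y_monotonic)      # close to +1
rho_abs, _ = spearmanr(x, y_abs)                  # close to 0

print(f"rank correlation, monotonic formula: {rho_monotonic:+.3f}")
print(f"rank correlation, ABS() formula:     {rho_abs:+.3f}")

Since the values shown in the Sensitivity data view are rank correlations, a formula like this leaves the chart with nothing stable to rank, which is consistent with the jumpy ordering observed above.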

To confirm that the relationships between the assumptions and the target forecast are indeed non-monotonic, we can run the Tornado Chart tool on the model and look at the spider chart. This chart helps identify which variables have a non-monotonic relationship with the target forecast. Looking at the spider chart (Figure 2), we see that this is indeed the case.
Figure 2: Spider Chart
The chart shows that each line bends back in the middle, which signifies non-monotonicity. If the relationships were monotonic, the curve for each variable would rise or fall steadily from the left end to the right end instead of folding back on itself.
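For readers who want to reproduce the idea outside Crystal Ball, here is a rough Python sketch of what the spider chart is checking: sweep one assumption at a time across a few test points while holding the others at base values, and see whether the forecast responds monotonically. The forecast formula, base values, and sweep range below are hypothetical stand-ins, not the actual model.

# Rough one-at-a-time sweep, in the spirit of the Tornado/spider analysis.
import numpy as np

def forecast(a):
    # Hypothetical aggregate of ABS() terms, the kind of formula in the model
    return np.abs(a[0] - 1.0) + 2.0 * np.abs(a[1]) + a[2]

base = np.array([0.0, 0.0, 0.0])          # assumption base values (illustrative)
test_points = np.linspace(-2.0, 2.0, 5)   # e.g. a low-to-high percentile sweep

for i in range(len(base)):
    responses = []
    for p in test_points:
        trial = base.copy()
        trial[i] = p
        responses.append(forecast(trial))
    diffs = np.diff(responses)
    monotonic = np.all(diffs >= 0) or np.all(diffs <= 0)
    print(f"assumption {i}: responses={np.round(responses, 2)} monotonic={monotonic}")

In this toy version, the assumptions wrapped in ABS() produce responses that fall and then rise again (the bent lines in Figure 2), while the plain additive assumption sweeps monotonically.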

Update (11/7/2011): Language.
Update (12/8/2011): Language.
