Friday, July 16, 2010

A different run (Part 3): Running OptQuest with low-confidence testing at Extreme Speed

This is the third part of a series of posts detailing the reasons for getting different simulation output when rerunning Monte Carlo simulation models using CB. The other parts can be found here:
Part 1: CB Excel functions are not tied to seed
Part 2: Excel RAND() function in CB models - handle with care
Part 4: Multi-threading and seed

This post deals with a rather obscure setting which can lead to this situation. In the Options tab in OptQuest, you will find an option called "Enable low-confidence testing" in the "Advanced Options..." dialog box (as shown below).

As mentioned in the dialog box, this setting improves optimization time by stopping a simulation early if the solution appears to be inferior to the best solution found so far. However, this option does not work reliably when the simulation is run in extreme speed. The reason is that, in extreme speed, the random numbers are generated in bursts, and the size of these bursts can vary from simulation to simulation. Naturally, the statistics calculated from the trials can also vary with the burst size. As an example, say that the first time you run an optimization model with this setting enabled, the burst size is 500, and the engine decides to stop early because the solution seems inferior. In the next optimization run, the burst size may be 400, so the number of trials completed when the simulation stops is different, and so are the resulting statistics.
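The effect can be illustrated with a small sketch. This is not how CB's engine is implemented; it is a hypothetical Monte Carlo run that assumes identical seeding but an early stop at a different trial count (mimicking a different burst size), which is enough to change the computed statistic.

```python
import random

def run_mean(seed, n_trials):
    """Hypothetical simulation run: same seed every time, but the
    number of trials executed depends on where the engine stops."""
    rng = random.Random(seed)
    samples = [rng.gauss(100, 15) for _ in range(n_trials)]
    return sum(samples) / len(samples)

# Same seed, but early stopping after a different "burst size"
# (e.g. 500 trials in one run, 400 in the next).
mean_a = run_mean(42, 500)
mean_b = run_mean(42, 400)
print(mean_a, mean_b)  # the two runs report different statistics
```

Even though the underlying random sequence is identical, truncating it at a different point changes the mean the optimizer sees, which is why the accept/reject decision, and hence the final result, can differ between runs.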

Workaround: Do not use this option if you want to reproduce optimization results.
