Friday, November 18, 2011

Accuracy of solutions in OR solvers

At the INFORMS Annual Meeting 2011, which just concluded (I had a great time, by the way), I attended a talk on improving the accuracy of optimization solvers. It was a well-attended talk, and the method was interesting. The method was in the context of LPs and MIPs. The speaker suggested that, once an initial solution was found, the problem be reformulated as follows (if I have this right):
  • Scale the problem so that the discrepancy moves up toward the decimal point. E.g., if the discrepancy appears at the sixth decimal place, scale so that it appears at the first or second.
  • Consider only active or near-active constraints (the constraint set can be filtered using each constraint's shadow price).
  • Solve the reduced problem using a warm start from the previous solution, and iterate this scheme a few times.
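A rough sketch of the scheme as I understood it, using SciPy's `linprog` (HiGHS backend) as the solver. This is illustrative only: SciPy's interface does not expose a true warm start, so the last step simply re-solves the reduced, rescaled problem; the function name `refine_lp`, the uniform scaling factor, and the tolerance are my own assumptions, not the speaker's.

```python
# Illustrative sketch (not the speaker's implementation) of the
# scale / filter / re-solve scheme for LPs, using SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

def refine_lp(c, A_ub, b_ub, scale=1e4, tol=1e-6):
    """One round of the scale-filter-resolve scheme (illustrative only)."""
    # Step 0: initial solve.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    x = res.x

    # Step 1: rescale rows of [A | b] so small discrepancies show up
    # near the decimal point. (A uniform factor is an assumption here;
    # per-row factors chosen from the observed discrepancies would be
    # closer to the talk's description.)
    A_s, b_s = A_ub * scale, b_ub * scale

    # Step 2: keep only active / near-active constraints, identified by
    # small slacks or nonzero shadow prices (dual values).
    slack = b_ub - A_ub @ x
    duals = res.ineqlin.marginals  # dual values from the HiGHS solve
    keep = (np.abs(slack) < tol) | (np.abs(duals) > tol)

    # Step 3: re-solve the reduced, rescaled problem. A commercial
    # solver would warm-start from the previous basis here.
    res2 = linprog(c, A_ub=A_s[keep], b_ub=b_s[keep], method="highs")
    return res2.x

# Tiny example: min -2x - y  s.t.  x + y <= 1,  x <= 0.6,  x, y >= 0.
c = np.array([-2.0, -1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([1.0, 0.6])
x = refine_lp(c, A, b)
```

Since multiplying each row of the constraint system by a positive factor leaves the feasible region unchanged, the re-solve targets the same optimum, just with the residuals magnified to where the solver's tolerances can see them.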
I guess this scheme can be extended to constrained nonlinear programming problems as well (and, I think, the author mentioned as much), but my first instinctive question was entirely different: why would someone need that much accuracy in the first place, other than for bragging rights when competing with other solvers? Unless one is solving textbook problems with given data, typical input data are not accurate beyond the decimal point, and even when they are, seldom beyond one or two decimal places. Why solve a problem accurately to 10 decimal places when the input data are not that accurate?

Now, I am aware of industries where the data often lie between 0 and 1, for example chemical processes, but shouldn't we be rescaling those problems before solving, rather than applying the above procedure after solving? Maybe I am missing something here!
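The kind of pre-solve rescaling I have in mind can be sketched as simple row/column equilibration of the constraint matrix, so every coefficient ends up of order 1 before the solver ever sees the problem. This is a minimal max-norm version for illustration; real solvers use more refined schemes (geometric-mean scaling, Curtis-Reid, etc.), and the function name is my own.

```python
# Minimal sketch of pre-solve rescaling: row/column equilibration by
# maximum absolute value, so every row and column of the constraint
# matrix has entries of order 1.
import numpy as np

def equilibrate(A):
    """Return row scalings r, column scalings s, and the scaled matrix."""
    r = 1.0 / np.max(np.abs(A), axis=1)    # bring each row's max to 1
    A_r = A * r[:, None]
    s = 1.0 / np.max(np.abs(A_r), axis=0)  # then each column's max to 1
    return r, s, A_r * s[None, :]

# A badly scaled matrix mixing magnitudes of 1e-6 and 1e3:
A = np.array([[1e-6, 2e-6],
              [3e3,  5e3]])
r, s, A_scaled = equilibrate(A)
```

The scalings `r` and `s` amount to a change of units on the constraints and variables, so the optimum of the scaled problem maps straight back to the original one.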


  1. Good point about the accuracy of the input data -- not to mention that, in many models, enough assumptions have been made that the model is only a very approximate representation of reality. That said, there are some models that are numerically unstable (in some cases due to poor scaling), so that grossly incorrect "solutions" can be reported by the software. Perhaps that's what the presenters had in mind?

  2. @Paul: Agreed. I hope the presenter had something like what you are saying in mind; otherwise, chasing that level of accuracy definitely seems futile.