At the INFORMS Annual Meeting 2011, which just concluded (I had a great time, by the way), I attended a talk on improving the accuracy of optimization solvers. It was a well-attended talk, and the method was interesting. The method was in the context of LPs and MIPs. The speaker suggested that, once an initial optimal solution is found, the problem should be reformulated as follows (if I have got this right):
- Scale the problem so that the significant digits move closer to the decimal point. E.g., if the discrepancy shows up at the 6th decimal place, rescale so that it appears at the 1st or 2nd decimal place.
- Consider only the active or near-active constraints (the constraint set can be filtered using the shadow price of each constraint).
- Re-solve the problem, warm-starting from the previous solution. Iterate this scheme a few times.
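If I understood the filtering step correctly, it might look something like the sketch below on a toy LP. This is only my reading of the talk, not the speaker's code: I use scipy's `linprog` with the HiGHS backend, and I detect near-active constraints by their slacks rather than their shadow prices (near-zero slack and nonzero shadow price mark essentially the same binding constraints). scipy also cannot warm-start HiGHS from a previous solution, so the "warm start" here is just a plain re-solve of the reduced problem.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize -2x - y  subject to  x + y <= 1, x <= 0.7, y <= 0.9, x, y >= 0
c = np.array([-2.0, -1.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 0.7, 0.9])
bounds = [(0, None), (0, None)]

# Initial solve
res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")

# Filtering step: keep only constraints that are active or near-active.
# A constraint with slack ~ 0 is binding at the optimum; the talk filtered
# by shadow price instead, which flags the same constraints.
tol = 1e-6
active = res.slack < tol
A_act, b_act = A[active], b[active]

# Re-solve the reduced problem. A real implementation would also rescale
# the data here and warm-start the solver from res.x, which scipy's
# linprog/HiGHS interface does not support.
res2 = linprog(c, A_ub=A_act, b_ub=b_act, bounds=bounds, method="highs")

print("active constraints:", active)   # only x + y <= 1 and x <= 0.7 bind
print("refined solution:  ", res2.x)   # same optimum, (0.7, 0.3)
```

On this toy problem the reduced LP of course returns the same vertex; the point of the scheme, as I understood it, is that after rescaling, the re-solve on the smaller, better-conditioned problem can clean up the trailing digits of the solution.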
Now, I am aware of industries where the data often lie between 0 and 1, for example chemical processes, but shouldn't we be rescaling these problems before solving, rather than applying the above procedure after solving? Maybe I am missing something here!