### The TSP Refinement Solution, Continued: 2/3 Swing

The work on the algorithm I spoke of in my last post bore fruit! It works very well and rests on an insight that may not have been noted before. I lay the outline of the finished product before you now.

The attached graphs contain the empirical data I gathered on how well my algorithm performs.

I received some surprises, and other things did not surprise me. I guessed my algorithm would perform somewhere around O(n^3 log n), and as the graph shows, it is quite possible that it did. I was surprised to find, however, that the ratio of the greedy algorithm's results to my solver's (I call it the "2/3 swing" algorithm) was roughly constant: about 1.3x better than greedy. Perhaps this is because I started out with greedy and can only get so far refining any given solution. Perhaps if I started out with a closer approximation, I might get closer to optimality.

You see, my algorithm takes an initial arbitrary solution - in the case of the data provided, it started from a random greedy solution - and, essentially, descends into a 2-opt local minimum. But the insight I gained in creating this algorithm is that a 2-opt local minimum is not necessarily a 3-opt local minimum. In fact, it rarely is. This is like multi-dimensional gradient descent: just as a point is rarely a minimum across all dimensions at once, a tour is rarely a minimum across all opt levels at once. At least, that is the idea behind the algorithm.
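
This insight can be checked empirically. The sketch below is in Python rather than the C# used for the reported data, and every name in it is my own illustration: it drives small random asymmetric instances into a 2-opt local minimum by exhaustive segment reversal, then asks whether relocating a single city - one of the simplest 3-opt moves - still improves the tour.

```python
import random

def tour_cost(tour, cost):
    """Cost of a directed tour over an (asymmetric) cost matrix."""
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt_descent(tour, cost):
    """Reverse segments until no reversal improves the directed tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_cost(cand, cost) < tour_cost(tour, cost):
                    tour, improved = cand, True
    return tour

def relocation_improves(tour, cost):
    """Does moving a single city (one of the simplest 3-opt moves) help?"""
    base = tour_cost(tour, cost)
    n = len(tour)
    for i in range(n):
        rest = tour[:i] + tour[i + 1:]
        for k in range(len(rest) + 1):
            if tour_cost(rest[:k] + [tour[i]] + rest[k:], cost) < base:
                return True
    return False

random.seed(0)
still_improvable = 0
trials = 100
for _ in range(trials):
    n = 8
    cost = [[0 if a == b else random.uniform(1, 10) for b in range(n)]
            for a in range(n)]
    t = two_opt_descent(list(range(n)), cost)
    if relocation_improves(t, cost):
        still_improvable += 1
print(f"{still_improvable} of {trials} 2-opt minima still admit a 3-opt improvement")
```

Single-city relocation is used here because it is a 3-opt move that needs no reversal, so each candidate check stays cheap.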

Here's another clincher: reaching a 2-opt minimum on asymmetric, incomplete graphs (TSP city maps) has roughly the same complexity as reaching a 3-opt minimum, since a linearly large portion of the path must be reversed for each check of whether a change is an improvement. This makes a thorough 2-opt pass, whose search space is O(n^2), take O(n^3) time. For a 3-opt local minimum, the search space is O(n^3) but each check is O(1), so finding a 2-opt local minimum and then trying to find a 3-opt local minimum doesn't cost much more in complexity - only a constant-factor increase in time.
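
To make that cost concrete, here is a minimal Python sketch (the function name and matrix layout are my own assumptions, not from the original C# code) of evaluating one 2-opt reversal on an asymmetric cost matrix. The four boundary arcs give the usual O(1) part; the loop over the segment's internal arcs is what a symmetric solver skips, and it is what pushes a thorough 2-opt pass to O(n^3).

```python
def tour_cost(tour, cost):
    """Cost of a directed tour over an (asymmetric) cost matrix."""
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt_delta_asymmetric(tour, cost, i, j):
    """Cost change from reversing the segment tour[i+1..j] of a directed tour.

    On a symmetric map only the four boundary arcs change, an O(1) check.
    Here every internal arc of the segment flips direction too, so the
    check is O(j - i) per candidate."""
    n = len(tour)
    a, b = tour[i], tour[i + 1]
    c, d = tour[j], tour[(j + 1) % n]
    # boundary arcs: a->b and c->d are replaced by a->c and b->d
    delta = cost[a][c] + cost[b][d] - cost[a][b] - cost[c][d]
    # re-price the internal arcs, which now run backwards
    for k in range(i + 1, j):
        delta += cost[tour[k + 1]][tour[k]] - cost[tour[k]][tour[k + 1]]
    return delta
```

The delta can be verified against a brute-force recomputation of the reversed tour's cost; the two agree arc for arc.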

The way this algorithm lands in O(n^3 log n) time (or O(n^3.22), if the graph's regression line is to be believed) is by continually "swinging" back and forth between 2-opt and 3-opt to find the next minimum, so that a series of O(n^3) operations is looped over a few times. Between the 2-opt and 3-opt passes, city swapping is employed (taking two arbitrary cities on the best route so far and exchanging their positions in the order of visitation whenever the exchange improves the route's score) so as to "mix the bag" between swings. This operation is a small subset of 4-opt, and because it is only O(n^2) it does not hurt the complexity much. Additionally, since this algorithm works on asymmetric city maps, each "swing" also checks whether reversing the direction of the route improves the score. That check is linear in time and likewise does not greatly affect the complexity. Thus the algorithm follows:

    best route so far = (initial-approximation-solution)
    best cost = infinity
    while best cost > best route so far's cost:
        best cost = best route so far's cost
        reverse best route so far if that improves its score
        best route so far = (next 2-opt local minimum)
        best route so far = (next swap-cities local minimum)
        best route so far = (next 3-opt local minimum)
        best route so far = (next swap-cities local minimum)
    return best route so far
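
The loop above can be sketched in Python as follows. This is my own illustrative re-implementation, not the original C# code, and the 3-opt step here is restricted to segment relocation (one of the reconnections full 3-opt considers) to keep the sketch short.

```python
def tour_cost(tour, cost):
    """Cost of a directed tour over an (asymmetric) cost matrix."""
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def descend(tour, cost, moves):
    """Take improving moves from moves(tour) until a local minimum is reached."""
    improved = True
    while improved:
        improved = False
        base = tour_cost(tour, cost)
        for cand in moves(tour):
            if tour_cost(cand, cost) < base:
                tour, improved = cand, True
                break
    return tour

def two_opt_moves(tour):
    """All segment reversals."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n):
            yield tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def swap_moves(tour):
    """All exchanges of two cities' positions (a small subset of 4-opt)."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 1, n):
            cand = list(tour)
            cand[i], cand[j] = cand[j], cand[i]
            yield cand

def three_opt_moves(tour, max_seg=3):
    """Restricted 3-opt: relocate a short segment elsewhere in the tour."""
    n = len(tour)
    for i in range(n):
        for length in range(1, max_seg + 1):
            if i + length > n:
                break
            seg, rest = tour[i:i + length], tour[:i] + tour[i + length:]
            for k in range(len(rest) + 1):
                if k != i:
                    yield rest[:k] + seg + rest[k:]

def swing(tour, cost):
    """2/3 swing: alternate local searches until a full pass yields no gain."""
    best_cost = float("inf")
    while tour_cost(tour, cost) < best_cost:
        best_cost = tour_cost(tour, cost)
        rev = tour[::-1]  # direction matters on asymmetric maps
        if tour_cost(rev, cost) < tour_cost(tour, cost):
            tour = rev
        tour = descend(tour, cost, two_opt_moves)
        tour = descend(tour, cost, swap_moves)
        tour = descend(tour, cost, three_opt_moves)
        tour = descend(tour, cost, swap_moves)
    return tour
```

Separating the move generators from the descent loop makes it easy to plug in a full 3-opt generator in place of the restricted one without touching the swing logic itself.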

As I cannot post the original data, I will only post the graphs. Original data available upon request.

Technical Data:

Platform used: C# on .NET 4, running on an Acer Aspire One 532 netbook

Initial solution (used in refinement): random greedy

City Map: Asymmetric, Incomplete

Acknowledgements:

I used the C# TSP Framework provided by the CS department of Brigham Young University.

Some code written by Jimmy Hales (see previous post) was used in studying this problem, although all the code used in the 2/3 swing algorithm is original to the author of this post. Dr. Tony Martinez of Brigham Young University served as adviser during my study of this algorithm, and thanks to him I continued working on it.
