Ian Tullis, CS267 HW0: Ant Colony Optimization

Ant Colony Optimization

About me

My background is in chemistry (BS) and environmental science (MS), and I'm currently a lecturer in Integrative Biology. I also enjoy writing puzzles (some of which are constrained enough to require coding) for my side career with a company that puts on puzzle events for corporations and the general public. My interest in the "pheromones" and wandering "ants" of Ant Colony Optimization (ACO) doesn't actually stem from my chem/bio background, but it is apropos in light of my own nonlinear career path...

About ACO

Ant Colony Optimization is a metaheuristic that can be applied to a variety of optimization problems. It is based (to some extent) on the path-finding behavior of real ant colonies, as witnessed in this "double-bridge" experiment, for example. When a colony is given a choice of two paths of unequal length that lead to a food source, it will eventually converge on the shorter path via a positive feedback loop: although individual ants traveling down both paths leave pheromone trails, the ants that choose the shorter path reach the food and return more rapidly. As a result, the shorter path accumulates more pheromone and is chosen by still more ants, and so on. This convergence is an example of "swarm intelligence".

In ACO, a number of simulated "ants" move throughout a search space. The ants' paths are determined randomly at first, but as individual ants reach a goal state, they retrace the paths that led them there (first eliminating any loops), leaving behind "pheromones" (only on the way back). To prevent this positive feedback loop from fixating on a suboptimal solution (as real ants will), pheromones dissipate rapidly, and ants always have a chance of exploring less well-trodden paths.
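To make the mechanics concrete, here is a minimal sketch in C (my own illustration, not code from any particular ACO package) of the two core operations in a TSP-style ACO: an ant picks its next city with probability proportional to pheromone (raised to a weight alpha) times a heuristic desirability, 1/distance (raised to a weight beta); once all ants have completed tours, pheromone evaporates by a factor rho and each ant deposits pheromone along its tour in proportion to the tour's quality. The alpha/beta/rho notation follows the standard ACO literature; everything else (array names, the choose_next_city and update_pheromones helpers, the problem size) is a placeholder I made up.

    /* Minimal ACO sketch for the TSP: edge choice + pheromone update.
     * Illustrative only; parameter values and names are assumptions. */
    #include <math.h>
    #include <stdlib.h>

    #define N      50      /* number of cities (arbitrary) */
    #define ALPHA  1.0     /* weight of pheromone */
    #define BETA   2.0     /* weight of heuristic (1/distance) */
    #define RHO    0.5     /* evaporation rate */

    double tau[N][N];      /* pheromone on each edge; initialized elsewhere */
    double dist[N][N];     /* edge lengths; initialized elsewhere */

    /* Pick the next city for an ant at `current`, biased toward edges with
     * more pheromone and shorter length (roulette-wheel selection). */
    int choose_next_city(int current, const int visited[N])
    {
        double weight[N], total = 0.0;
        for (int j = 0; j < N; j++) {
            weight[j] = visited[j] ? 0.0
                      : pow(tau[current][j], ALPHA) * pow(1.0 / dist[current][j], BETA);
            total += weight[j];
        }

        double r = ((double)rand() / RAND_MAX) * total;
        int last_valid = -1;
        for (int j = 0; j < N; j++) {
            if (weight[j] <= 0.0) continue;
            last_valid = j;
            r -= weight[j];
            if (r <= 0.0) return j;
        }
        return last_valid;   /* fallback for floating-point round-off */
    }

    /* After every ant has completed a tour: evaporate, then let each ant
     * deposit pheromone inversely proportional to its tour length. */
    void update_pheromones(int n_ants, int tours[][N], const double tour_len[])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                tau[i][j] *= (1.0 - RHO);          /* evaporation */

        for (int a = 0; a < n_ants; a++)
            for (int k = 0; k < N; k++) {
                int i = tours[a][k], j = tours[a][(k + 1) % N];
                tau[i][j] += 1.0 / tour_len[a];    /* deposit on the way back */
                tau[j][i] += 1.0 / tour_len[a];
            }
    }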

ACO is not inherently parallel, but there are opportunities for both coarse-grained (one colony per processor) and fine-grained (one or a few ants per processor) parallelism. The heuristic would seem to be far from "embarrassingly parallel", though, since every ant can deposit pheromones and every ant needs the most up-to-date pheromone levels for each path before it can make its next decision.
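To make the coarse-grained option concrete, here is a hedged MPI sketch in C (again my own illustration, not ACOTSP code) of one common arrangement: each rank runs its own colony independently and only occasionally exchanges its best-so-far tour length with the other ranks. The synchronous, fully connected exchange shown here is one of the communication schemes the study described below compares; the two "colony" functions are empty placeholders standing in for a local, sequential ACO implementation.

    /* Coarse-grained sketch: one ant colony per MPI process. Each rank
     * iterates its own colony and only occasionally compares notes. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERATIONS        1000
    #define EXCHANGE_INTERVAL 50     /* how often the colonies exchange results */

    static double local_best = 1e30;                 /* best tour length found locally */
    static void   run_colony_iteration(void) { /* ... advance the local colony ... */ }
    static double best_tour_length(void)     { return local_best; }

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int it = 1; it <= ITERATIONS; it++) {
            run_colony_iteration();                  /* independent work on each rank */

            if (it % EXCHANGE_INTERVAL == 0) {
                /* synchronous, fully connected exchange: every colony learns
                 * the best tour length found anywhere so far */
                double mine = best_tour_length(), best_anywhere;
                MPI_Allreduce(&mine, &best_anywhere, 1, MPI_DOUBLE, MPI_MIN,
                              MPI_COMM_WORLD);
                if (rank == 0)
                    printf("iteration %d: best tour length so far = %g\n",
                           it, best_anywhere);
                /* a real implementation would also share the tour itself and
                 * use it to reinforce the local pheromone matrix */
            }
        }

        MPI_Finalize();
        return 0;
    }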

Application: Parallelizing the Traveling Salesman Problem

The Traveling Salesman Problem, a combinatorial route-optimization problem with many applications (e.g., circuit board design), is a natural fit for (and a well-studied topic within) ACO. I will describe one TSP-based ACO study, by Manfrin et al. (2006), that I found interesting.

Manfrin's group parallelized an ACO variant known as MAX-MIN Ant System. (Briefly, this variant caps pheromone values between lower and upper bounds to forestall the premature, suboptimal convergence I mentioned above.) They carried out their experiments on four nodes with two 1.8 GHz AMD Opteron 244 CPUs each, with each processor managing a colony of 25 ants. (This was hardly a supercomputer setup, but Opteron-based machines do show up on the TOP500 lists of the time.) It was a distributed-memory platform, with processors communicating via message passing (MPI). The software was the ACO tool ACOTSP, written in C, and the particular Traveling Salesman instances came from the TSPLIB library.

The goal of the experiment was to compare the efficiencies of various communication topologies among processors -- e.g., fully connected (every processor can communicate with every other), hypercube (the processors sit at the vertices of a hypercube and can only communicate with processors that share an edge), and independent (no communication; each processor carries out a separate run and the best solution found is reported at the end). Synchronous and asynchronous versions of each topology were tested (except, of course, for the parallel independent runs, which did not communicate).
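As a concrete illustration of the hypercube topology (my own sketch, not code from the study): with P = 2^d processors, rank r's neighbors are exactly the ranks that differ from r in one bit, so flipping each bit of r in turn visits all log2(P) neighbors. The helper below swaps best-so-far tour lengths with each neighbor via MPI_Sendrecv; the function name and the best_len argument are placeholders I introduced.

    /* Hypercube exchange sketch: with P = 2^d processes, rank r's neighbors
     * are r with one bit flipped (r ^ 1, r ^ 2, r ^ 4, ...). Each pair swaps
     * its best-so-far tour length. Illustrative only. */
    #include <mpi.h>

    void hypercube_exchange(double *best_len, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);          /* assumed to be a power of two */

        for (int bit = 1; bit < size; bit <<= 1) {
            int neighbor = rank ^ bit;       /* differs from rank in exactly one bit */
            double incoming;

            MPI_Sendrecv(best_len, 1, MPI_DOUBLE, neighbor, 0,
                         &incoming, 1, MPI_DOUBLE, neighbor, 0,
                         comm, MPI_STATUS_IGNORE);

            if (incoming < *best_len)        /* adopt the neighbor's result if better */
                *best_len = incoming;
        }
    }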

Although these parallel versions of MAX-MIN Ant System found sufficiently good solutions faster than the sequential version, the parallel independent runs beat all of the other schemes, which apparently spent too much time communicating and not enough time solving. (Moreover, the asynchronous version of a given topology always beat the synchronous version, perhaps for similar reasons.) Even when the authors decreased the frequency of communication, the other topology/communication-mode combinations still did not improve enough to match the speed of the parallel independent runs. Thus, this particular version of ACO performed best when run in an "embarrassingly parallel" way! The authors speculated that their version of MAX-MIN Ant System was still too likely to converge on suboptimal solutions (and to spread progress toward those solutions around the network); independent parallel runs did not propagate these mistakes.

I imagine that using even more cores/colonies would only widen the gap between the results of parallel independent runs and communicating schemes, especially for the most interconnected topologies. Increasing the problem size would probably have a similar effect, since larger "pheromone matrices" would need to be passed around. A shared-memory system might handle larger problem sizes better.

I found it interesting that the best communication scheme, in this particular case, was to not communicate at all. Still, as the authors pointed out, a communication scheme better than independence may exist.

References used, besides those linked to above:

Dorigo, Marco, and Thomas Stützle. Ant Colony Optimization. Cambridge, Mass.: The MIT Press, 2004.

Solnon, Christine. Ant Colony Optimization and Constraint Programming. Hoboken, N.J.: John Wiley & Sons, Inc., 2010.