Stochastic diffusion search

From Scholarpedia
John Mark Bishop (2007), Scholarpedia, 2(8):3101. doi:10.4249/scholarpedia.3101, revision #91827

Curator: John Mark Bishop

Stochastic Diffusion Search is a robust, well-characterised swarm intelligence global metaheuristic that can efficiently solve search and optimisation problems with compositional structure.

Introduction

In recent years there has been growing interest in distributed modes of computation that exploit interactions between simple agents (e.g., evolutionary algorithms, particle swarm optimization, ant colony optimization). Some of these ‘swarm intelligence’ systems have been directly inspired by observing interactions between social insects, such as ants and bees. For example, algorithms inspired by the behaviour of ants - ant colony optimization algorithms - typically use the principle of communication via pheromone trails to successfully tackle hard search and optimisation problems. This indirect form of communication, based on modification of the physical properties of the environment, has been termed stigmergetic communication. The problem-solving ability of these algorithms emerges from the positive feedback mechanism and the spatial and temporal characteristics of the pheromone mass-recruitment system they employ. In contrast to such stigmergetic mechanisms, Stochastic Diffusion Search is a swarm intelligence metaheuristic (De Meyer et al., 2006) that uses a form of direct communication between agents, more analogous to the ‘tandem calling’ mechanism employed by the ant species Leptothorax acervorum.

Stochastic Diffusion Search (SDS) was first proposed by Bishop (1989) as a population-based, best-fit pattern-matching algorithm. Such algorithms can also be recast as optimisation problems by defining the objective function, F(x), for a hypothesis x about the best-fit location of the solution, as the similarity between the target pattern and the corresponding region at position x in the search space, and then finding x such that F(x) is maximised.

In general, SDS is most easily applied to optimisation problems whose objective function is decomposable into components that can be evaluated independently. To locate the optima of a given objective function, SDS employs a swarm of n agents, each of which maintains a hypothesis, xi, about the optimum. The SDS algorithm iterates TEST and DIFFUSION phases until the swarm of agents converges upon the optimum hypothesis.

Algorithm

INITIALISE (agents);

REPEAT

  TEST (agents);
  DIFFUSE (agents);

UNTIL CONVERGENCE (agents);


INITIALISE: Typically the initial hypothesis of each agent is selected uniformly at random over the search space. However, any a priori information about probable solutions can be used to bias the initial selection of hypotheses.

TEST: Each agent in the swarm, maintaining hypothesis xi, applies a Boolean ‘test-function’ that returns TRUE if a randomly selected partial evaluation of the objective function at xi is indicative of a ‘good’ hypothesis. For example, in string matching, given a hypothesised position xi, the test function may return TRUE if the jth sub-feature of the target string is present at position (xi + j) in the search space. If the test-function returns TRUE the agent becomes an active member of the swarm; otherwise it becomes a passive member.
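
To make this concrete, the following minimal sketch (in Python, with hypothetical names) shows such a Boolean test-function for the string-matching example, evaluating only a single randomly chosen sub-feature of the target rather than the whole pattern:

import random

def test(hypothesis, target, search_space):
    # Partial evaluation: pick one sub-feature (here, a single character)
    # of the target at random ...
    j = random.randrange(len(target))
    # ... and check only that sub-feature against the search space.
    return search_space[hypothesis + j] == target[j]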

The ‘test-score’ of a given hypothesis, xi, is the probability that the ‘test-function’ will return TRUE, and is hence representative of the value of the objective function, F(xi).

DIFFUSION: Hypotheses spread through the swarm by direct inter-agent communication. In standard SDS each passive agent communicates with a randomly selected member of the swarm; if the selected agent is active, it communicates its hypothesis to the passive agent, otherwise the passive agent re-initialises with a new hypothesis selected at random from the entire search space.

CONVERGENCE: As iterations progress, clusters of agents sharing the same hypothesis form until, at convergence, the largest cluster of agents defines the optimal solution. Two possible halting criteria for detecting convergence are: (a) the ‘Strong Halting Criterion’, which, after establishing that the largest cluster of agents exceeds a minimum threshold, checks that the cluster size is [stochastically] stable over a given number of iterations; and (b) the ‘Weak Halting Criterion’, which simply checks the stability and minimum size of the total number of active agents (total activity being strongly dependent on the best solution found so far).
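
Putting the phases together, a minimal sketch of standard SDS for the string-matching example might look as follows (all names are hypothetical, and the weak halting criterion is simplified here to a fixed activity threshold):

import random

def sds(target, search_space, n_agents=100, max_iters=1000):
    valid = len(search_space) - len(target) + 1  # admissible positions
    # INITIALISE: hypotheses drawn uniformly at random over the search space.
    hyps = [random.randrange(valid) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(max_iters):
        # TEST: each agent partially evaluates its hypothesis.
        for i in range(n_agents):
            j = random.randrange(len(target))
            active[i] = (search_space[hyps[i] + j] == target[j])
        # DIFFUSE (passive recruitment): each passive agent polls a random
        # agent; it copies an active agent's hypothesis, or re-initialises.
        new_hyps = hyps[:]
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                new_hyps[i] = hyps[k] if active[k] else random.randrange(valid)
        hyps = new_hyps
        # Simplified weak halting criterion: stop once activity is high.
        if sum(active) > 0.9 * n_agents:
            break
    # The largest cluster of agents defines the best-fit position.
    return max(set(hyps), key=hyps.count)

print(sds("needle", "can you find the needle in this haystack?"))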

Parallel SDS

The SDS algorithm is inherently parallel; in practice, however, running SDS as a parallel algorithm is problematic because (a) each agent may need to communicate with any other and (b) the volume of inter-agent communication is large. One solution is to arrange agents in a lattice, with direct communication restricted to the k nearest neighbours; alternatively, the swarm may be split into multiple fully connected sub-swarms, each running on a separate processor, with low-frequency communication between sub-swarms (De Meyer et al., 2002).
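
For illustration, restricting communication to lattice neighbours only changes how the polled agent is chosen; a minimal sketch (hypothetical names, assuming agents arranged on a ring with k neighbours on either side):

import random

def select_neighbour(i, n_agents, k):
    # Lattice SDS on a ring: agent i may only poll one of its k nearest
    # neighbours on either side, instead of any member of the swarm.
    offset = random.choice([d for d in range(-k, k + 1) if d != 0])
    return (i + offset) % n_agents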

Parallel implementations of SDS typically assume that the TEST and DIFFUSION phases of all agents run concurrently, although it has been shown that relaxing this requirement to independent, asynchronous agent behaviour has only a marginal impact on overall SDS behaviour.

Recruitment Strategies

Recruitment strategies affect only the DIFFUSION phase of SDS; the rest of the algorithm remains unchanged. The three recruitment strategies investigated to date are: passive recruitment (the standard mechanism), active recruitment, and dual recruitment.

As outlined in the standard algorithm description above, in passive recruitment passive agents select active agents for possible communication of solutions: each passive agent A randomly selects another agent B to communicate with, and if B is active then B’s hypothesis is communicated to A.

Active recruitment is modelled on the behaviour of swarming insect species that actively try to recruit others to a food source or nest site. For instance, in some species (e.g. Leptothorax acervorum) ants perform tandem calling, in which an ant that has found food iteratively attempts to recruit others to the source. Similarly, honey bees perform a waggle dance within the hive to indicate the location of promising food sources to other bees. In active recruitment, during the diffusion phase active agents seek out passive agents to communicate their hypotheses to: each active agent A randomly contacts another agent B, and if B is passive then B is recruited by A (A’s hypothesis is communicated to B). In contrast with passive recruitment, where in principle the optimal cluster can grow by any amount in a single iteration, under active recruitment a cluster can at most double in size per iteration (if every active agent selects a passive agent).

In dual recruitment, both the active and passive recruitment mechanisms operate simultaneously: both active and passive agents make selections, and either can cause the transfer of a hypothesis from an active to a passive agent. This mix of recruitment mechanisms is considered the most biologically plausible and is therefore of special interest. NB. In practice, dual recruitment can produce conflicting hypothesis assignments, so priority must be given either to an active agent assigning its hypothesis to a passive agent (active priority) or to a passive agent copying a hypothesis from an active agent (passive priority).
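
The variants can be sketched in one diffusion routine, reusing the arrays from the earlier sketch (hypothetical names; in particular, the treatment of unrecruited passive agents under pure active recruitment is an assumption here):

import random

def diffuse(hyps, active, valid, mode="dual"):
    n = len(hyps)
    new_hyps = hyps[:]
    recruited = [False] * n
    if mode in ("active", "dual"):
        # Active recruitment: each active agent polls a random agent and,
        # if it is passive and not yet recruited, copies its hypothesis over
        # (so a cluster can at most double in size per iteration).
        for i in range(n):
            if active[i]:
                k = random.randrange(n)
                if not active[k] and not recruited[k]:
                    new_hyps[k] = hyps[i]
                    recruited[k] = True
    for i in range(n):
        # Active priority: agents already recruited above keep that hypothesis.
        if not active[i] and not recruited[i]:
            if mode in ("passive", "dual"):
                # Passive recruitment: poll a random agent, copy if active.
                k = random.randrange(n)
                if active[k]:
                    new_hyps[i] = hyps[k]
                    continue
            # Assumption: otherwise the passive agent re-initialises at random.
            new_hyps[i] = random.randrange(valid)
    return new_hyps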

Manipulating the resource allocation process

(a): shifting the balance of the search towards local exploration

Standard SDS has no mechanism to exploit self-similarity in the objective function - a regularity exhibited by many real-world problems, namely that nearby solutions in the solution space often have similar objective function values. However, a mechanism introducing small variations on the hypotheses already present in the swarm can easily be incorporated into the algorithm. One possibility is to perturb the copying of hypothesis parameters by adding a small random offset during replication of a hypothesis in the diffusion phase, much like mutation in evolutionary algorithms. The effect is to smear large clusters of agents over neighbouring locations in the solution space. This allows the SDS process to implicitly perform hill-climbing - improving convergence times in solution spaces with self-similarity - as well as to track moving peaks in nonstationary or dynamic problems.
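
One way to realise this is to add a small random offset whenever a hypothesis is copied during diffusion; a sketch assuming a real-valued one-dimensional search space (the offset scale sigma is a free parameter, not prescribed by SDS itself):

import random

def copy_with_perturbation(hypothesis, sigma=0.01, lower=0.0, upper=1.0):
    # Rather than copying the active agent's hypothesis exactly, add a small
    # Gaussian offset; clusters smear over neighbouring solutions, allowing
    # implicit hill-climbing and the tracking of moving peaks.
    perturbed = hypothesis + random.gauss(0.0, sigma)
    return min(max(perturbed, lower), upper)  # clamp to the search space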

(b): shifting the balance of the search towards global exploration

The conflicting demands of continued wide exploration of the solution space - especially in dynamic environments - versus the need for a stable cluster exploiting the best solution found so far are not necessarily best satisfied by standard SDS. Its allocation process is greedy: once a good solution is detected, a large proportion of the swarm is allocated to its exploitation, making these agents unavailable for further exploration. A mechanism that frees some of these resources without significantly affecting the stability of agent clusters would increase the efficiency of SDS for many types of problem, particularly in dynamic optimisation. One such mechanism is context-sensitive SDS.

Context-sensitive SDS differs from standard SDS only in the DIFFUSION phase behaviour of active agents. In standard SDS, active agents always retain their current hypothesis. In context-sensitive SDS, each active agent selects another agent at random; if the selected agent is active and supports the same hypothesis, the selecting agent becomes passive and picks a new random hypothesis from the search space. This rule self-regulates against the formation of very large clusters, since the probability that two active agents with the same hypothesis communicate during the diffusion phase increases with relative cluster size. It introduces a mechanism of negative selection, or negative feedback, into the original algorithm, and for some test-scores it also enables SDS to maintain relatively large clusters on multiple similar, near-optimal solutions.
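
The active-agent rule can be sketched as follows (hypothetical names, consistent with the earlier sketches):

import random

def context_sensitive_step(i, hyps, active, valid):
    # An *active* agent i polls a random agent; if that agent is also active
    # and holds the same hypothesis, agent i becomes passive and re-initialises,
    # freeing resources from clusters that have grown large.
    k = random.randrange(len(hyps))
    if k != i and active[k] and hyps[k] == hyps[i]:
        active[i] = False
        hyps[i] = random.randrange(valid)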

Analysis

Since its first publication in 1989, a large body of work analysing the behaviour of SDS has emerged, making it one of the best characterised metaheuristics. Some important results to date are:

  • 'Ehrenfest urn' characterisation of the steady-state behaviour of SDS (Nasuto, 1999).
  • Proof of convergence to the global optimum, in both noiseless and a variety of noisy search spaces, in linear (or sub-linear) time, using 'Markov chain' models (Bishop & Torr, 1992; Nasuto, Bishop & Lauria, 1998; Nasuto & Bishop, 1999).
  • A simple approximation of SDS convergence behaviour using 'dynamical systems' theory (Myatt et al., 2004).
  • Analysis of different recruitment mechanisms (Myatt et al., 2006).

Applications

Stochastic Diffusion Search has been applied to many diverse search and optimisation problems such as site selection for wireless networks, sequence detection in bio-informatics, mobile robot self-localisation, object recognition, eye tracking, lip tracking and text search.

Further information on SDS, including an on-line paper repository, can be found at: http://www.cirg.reading.ac.uk/SDP/index.htm

References

  • Bishop, J.M., (1989), Stochastic searching networks, in 1st IEE Conf. on ANNs, pp. 329-331, London UK.
  • Bishop, J.M. & Torr, P., (1992), The Stochastic Search Network, in Lingard, R., Myers, D.J. & Nightingale, C. (eds), Neural Networks for Vision, Speech & Natural Language, pp. 370-388, Chapman Hall.
  • De Meyer, K., Bishop, J.M., Nasuto, S.J., (2002), Small world effects in Lattice Stochastic Diffusion Search, Proc. ICANN 2002, in Lecture Notes in Computer Science, 2415, pp. 147-152, Madrid, Spain.
  • De Meyer, K., Nasuto, S.J. & Bishop, J.M., (2006), Stochastic Diffusion Optimisation: the application of partial function evaluation and stochastic recruitment in Swarm Intelligence optimisation, Volume 2, Chapter 12 in Abraham, A., Grosan, C. & Ramos, V. (eds), Swarm Intelligence and Data Mining, Springer Verlag.
  • Myatt, D.R., Bishop, J.M. & Nasuto, S.J., (2004), Minimum stable convergence criteria for Stochastic Diffusion Search, Electronics Letters, 40:2, pp. 112-113.
  • Myatt, D.R., Nasuto, S.J. & Bishop, J.M., (2006), Alternative recruitment strategies for Stochastic Diffusion Search, Artificial Life X, Bloomington USA.
  • Nasuto, S.J., (1999). Resource Allocation Analysis of the Stochastic Diffusion Search, PhD Thesis, University of Reading, UK.
  • Nasuto, S.J., Bishop, J.M. & Lauria, S., (1998), Time complexity analysis of the Stochastic Diffusion Search, Neural Computation ‘98, Vienna, Austria.
  • Nasuto, S.J. & Bishop, J.M., (1999), Convergence of the Stochastic Diffusion Search, Parallel Algorithms and Applications, 14:2, pp. 89-107.

See Also

Swarm Intelligence
