1. Introduction
Advances in scalable computing and artificial intelligence have spurred the development of swarm intelligence approaches. Swarm intelligence algorithms mimic the cooperative, group behavior of social organisms in nature.
In this tutorial, we’ll look at what swarm intelligence means and how it works. Then, we’ll go through different real-world examples and applications of swarm intelligence algorithms.
Finally, we’ll choose an example to demonstrate the importance of the swarm intelligence approach using one of the most popular swarm intelligence-based optimization algorithms.
2. What Is Swarm Intelligence?
Swarm intelligence is an artificial or natural intelligence technique. It is based on studying collective behavior in decentralized and self-organized systems. Gerardo Beni and Jing Wang introduced the term in 1989 in the context of cellular robotic systems.
2.1. Principles of Swarm Intelligence
Let’s discuss the swarm intelligence principles:
- Awareness: Each member must be aware of their surroundings and abilities
- Autonomy: To self-coordinate, each member must operate as an autonomous master (not as a slave)
- Solidarity: When a task is completed, members should autonomously look for a new task
- Expandability: The system must permit dynamic expansion where members are seamlessly aggregated
- Resiliency: When members are removed, the system must be self-healing
2.2. Real-World Examples
We can find many examples of swarm intelligence in nature, such as ant colonies, beehives, fish schooling, bird flocking, bacterial growth, and microbial intelligence.
Swarm behavior also confers biological advantages. For example, birds flying in a flock formation use up to a fifth less energy than birds that fly solo. In addition, swarm intelligence is modeled to understand how macroscopic (global) behavior emerges from simple local interactions. Furthermore, it provides ideas for building artificial systems with similar properties.
Besides, there are two main development areas of swarm intelligence:
- Particle Swarm Optimization: One of the most well-known swarm intelligence-based optimization techniques. Particle swarm optimization was modeled by the social behavior of animals and insects. In this context, every individual swarm member is handled as a particle. Cooperation and learning enable the collective intelligence of these dispersed particles.
- Ant Colony Optimization: Based on the social instincts of real ants within their community, ant colony optimization is an essential component of swarm intelligence. The algorithm models how ants cooperate to accomplish a common objective: starting from the nest, the ants must locate food and carry it back to the colony, coordinating indirectly through pheromone trails.
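To make the ant colony idea concrete, here’s a minimal sketch of ant colony optimization applied to a tiny four-city tour problem. The distance matrix, parameter values, and helper names are all illustrative assumptions for this example, not part of any standard formulation:

```python
import random

# Illustrative 4-city distance matrix (symmetric)
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)
ALPHA, BETA = 1.0, 2.0  # influence of pheromone vs. distance
RHO = 0.5               # pheromone evaporation rate
Q = 10.0                # pheromone deposit factor

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour(pheromone, rng):
    # Each ant builds a tour city by city, favoring short,
    # heavily-marked edges
    start = rng.randrange(N)
    tour, unvisited = [start], set(range(N)) - {start}
    while unvisited:
        cur = tour[-1]
        candidates = list(unvisited)
        weights = [pheromone[cur][j] ** ALPHA * (1.0 / DIST[cur][j]) ** BETA
                   for j in candidates]
        nxt = rng.choices(candidates, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def ant_colony(iterations=50, ants=10, seed=42):
    rng = random.Random(seed)
    pheromone = [[1.0] * N for _ in range(N)]
    best_tour, best_len = None, float("inf")
    for _ in range(iterations):
        tours = [build_tour(pheromone, rng) for _ in range(ants)]
        # Evaporate, then let each ant deposit pheromone along its tour
        for i in range(N):
            for j in range(N):
                pheromone[i][j] *= (1.0 - RHO)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(N):
                a, b = tour[i], tour[(i + 1) % N]
                pheromone[a][b] += Q / length
                pheromone[b][a] += Q / length
    return best_tour, best_len

best_tour, best_len = ant_colony()
print(best_tour, best_len)
```

Short edges accumulate pheromone faster, so later ants are increasingly biased toward the shortest tour, mirroring how real colonies converge on the shortest path to a food source.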
2.3. Applications
In addition to its use in traditional optimization issues, swarm intelligence has applications in the acquisition of library items, communications, the categorization of medical datasets, dynamic control, the planning of heating systems, and the tracking and prediction of moving objects. Furthermore, swarm intelligence may be used in many different areas of basic research, engineering, business, and the social sciences.
3. Swarm Intelligence Algorithm Example
In this section, we describe Particle Swarm Optimization in more detail.
Particle swarm optimization has been regarded as one of the most effective optimization approaches for decades due to its ease of use, the small number of parameters that need tuning, speedy convergence, and scalability. In addition, particle swarm optimization is one of the earliest swarm-based algorithms. It allows simple individuals in a basic living structure to join forces and create a more intelligent collective structure.
3.1. Particle Swarm Optimization Algorithm
Let’s see how conventional particle swarm optimization works:
algorithm ParticleSwarmOptimization(N, c1, c2, w, max_iterations, x_limit, v_limit):
    // INPUT
    //   N = number of particles
    //   c1, c2 = acceleration coefficients
    //   w = inertia weight
    //   max_iterations = maximum number of iterations
    //   x_limit = position limits for particles (min and max boundaries)
    //   v_limit = velocity limits for particles (min and max boundaries)
    // OUTPUT
    //   Optimal solution found by the swarm

    Initialize population P of N particles
    for each particle i in P:
        Initialize x[i] randomly within x_limit
        Initialize v[i] randomly within v_limit
        Initialize p_best[i] = x[i]
    Initialize g_best as the best of the p_best[i] positions

    for iteration in range(max_iterations):
        for each particle i in P:
            Calculate the fitness f_i of x[i]
            if f_i is better than the fitness at p_best[i]:
                p_best[i] = x[i]
            if f_i is better than the fitness at g_best:
                g_best = x[i]
            // Velocity update (r1, r2 are fresh random numbers in [0, 1])
            v[i] = w * v[i] + c1 * r1 * (p_best[i] - x[i]) + c2 * r2 * (g_best - x[i])
            Clamp v[i] to [v_limit.min, v_limit.max]
            // Position update
            x[i] = x[i] + v[i]
            Clamp x[i] to [x_limit.min, x_limit.max]

    return g_best  // the global best position found by the swarm
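The pseudocode above can be sketched as a runnable program. The objective function (the sphere function, whose minimum is at the origin), the parameter values, and the bounds are illustrative assumptions for this example, not prescribed by the algorithm itself:

```python
import random

def sphere(x):
    """Fitness function: sum of squares, minimized at the origin."""
    return sum(xi * xi for xi in x)

def pso(n_particles=30, dim=2, c1=1.5, c2=1.5, w=0.7,
        max_iterations=200, x_limit=(-10.0, 10.0), v_limit=(-1.0, 1.0),
        seed=0):
    rng = random.Random(seed)
    lo, hi = x_limit
    vlo, vhi = v_limit

    # Initialization: random positions and velocities within the limits
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[rng.uniform(vlo, vhi) for _ in range(dim)] for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]
    p_best_fit = [sphere(xi) for xi in x]
    g_best = min(p_best, key=sphere)[:]

    for _ in range(max_iterations):
        for i in range(n_particles):
            fit = sphere(x[i])
            if fit < p_best_fit[i]:
                p_best[i], p_best_fit[i] = x[i][:], fit
            if fit < sphere(g_best):
                g_best = x[i][:]
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p_best[i][d] - x[i][d])
                           + c2 * r2 * (g_best[d] - x[i][d]))
                v[i][d] = max(vlo, min(vhi, v[i][d]))
                # Position update, clamped to the search-space limits
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
    return g_best

best = pso()
print(best, sphere(best))
```

With these settings, the swarm reliably converges close to the origin, where the sphere function reaches its minimum of zero.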
According to the particle swarm optimization algorithm stages, three crucial elements stand out: the velocity, the position, and the fitness function. The correct specification of the fitness function is a prerequisite for using meta-heuristic algorithms. Also, the iteration loop allows us to repeat the algorithm steps until we reach the best solution. These algorithms work by transforming problems into fitness functions that measure how far the particles are from the ideal solution:
Equation 1 gives a particle’s new velocity, and Equation 2 its new position:

v_i(t+1) = w * v_i(t) + c1 * r1 * (p_best_i − x_i(t)) + c2 * r2 * (g_best − x_i(t))    (1)

x_i(t+1) = x_i(t) + v_i(t+1)    (2)

Within the equations, each x_i and v_i is a vector of dimension d, the size of the problem, and i = 1, …, N, where N is the number of particles. Also, c1 and c2 represent the personal-best and global-best acceleration weights, and r1 and r2 are random values in the range [0, 1].
Besides, p_best_i tracks the best solution particle i has found, measured as the separation between that particle’s position and the food source. The closest the whole swarm has come to the food source so far is g_best.
Generally, these two elements, p_best and g_best, influence the convergence of each particle. They are crucial because they determine whether the particle will eventually reach a solution. Moreover, the number of particles is a parameter that directly affects the solution quality. Indeed, as the number increases, the result improves, but the algorithm runs more slowly.
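To make the update rule concrete, here’s a single one-dimensional update step with fixed illustrative numbers (r1 and r2 are hard-coded here for reproducibility; in the real algorithm they’re drawn at random each step):

```python
# One 1-D velocity/position update step with illustrative numbers
w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration weights
x, v = 4.0, 0.5              # current position and velocity
p_best, g_best = 3.0, 1.0    # personal and global best positions
r1, r2 = 0.4, 0.9            # fixed here; normally random in [0, 1]

# Equation 1: inertia + pull toward p_best + pull toward g_best
v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
# Equation 2: move with the new velocity
x_new = x + v_new
print(v_new, x_new)  # the particle moves toward the best positions
```

Both the cognitive term (toward p_best) and the social term (toward g_best) point left of the current position, so the particle accelerates toward the region where better solutions were found.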
3.2. Particle Swarm Optimization Convergence
In particle swarm optimization, the particles move to a new position with a new velocity at each iteration. Comparing snapshots of the swarm after, say, 1, 100, 200, and 300 iterations shows the particles drawing progressively closer to the goal, that is, to the solution to the problem.
The fact that near-optimal values can be obtained by tuning only a handful of parameters is an indication of the strength of this algorithm. For this reason, it is preferred in many different engineering problems due to its ease of application, simple structure, and efficient results.
4. Conclusion
In this article, we discussed the swarm intelligence approach. We presented the particle swarm optimization method as a representative algorithm for creating a more intelligent collective structure. We chose this algorithm because it’s highly effective and fast at solving the kinds of problems swarm intelligence targets.