Open access peer-reviewed article

Particle Swarm Optimization with a Simplex Strategy to Avoid Getting Stuck on Local Optimum

Vilmar Steffen

This article is part of the Numerical Analysis and Scientific Computing section


Article Type: Research Paper

Date of acceptance: September 2022

Date of publication: October 2022

DOI: https://doi.org/10.5772/acrt.11

Copyright: © 2022 The Author(s). Licensee: IntechOpen. License: CC BY 4.0


Table of contents


Introduction
Literature review
Proposed new algorithm
Numerical simulations methodology
Numerical simulation results and discussion
Conclusions
Table B.1
Conflict of interest
Acknowledgement

Abstract

Heuristic methods for global optimization have received much interest in recent years, among which the Particle Swarm Optimization (PSO) algorithm can be highlighted. However, the application of heuristic methods can lead to premature convergence. In this work, the addition of a step to the PSO algorithm is proposed. This new step, based on the Nelder–Mead simplex search method (NM), consists of repositioning the particle with the current global best solution, not to a better position, but away from the current nearest local optimum, to avoid getting stuck on that local optimum. Other PSO-NM algorithms exist, but the one we propose has a different strategy. The proposed algorithm was also tested with the repositioning strategy applied to particles beyond the current global best, depending on a repositioning probability. To evaluate the effectiveness of the proposed method and to study its best parameters, various test functions were used, and for each test function, various numbers of particles were used in combination with various probabilities of particle repositioning. A thousand runs were performed for each case, resulting in more than two million runs. The computational studies showed that repositioning the global best particle increases the percentage of success in reaching the global best solution, but better results can be obtained by applying the repositioning strategy to other particles with repositioning probabilities between 1 and 5%.

Keywords

  • particle swarm optimization

  • Nelder–Mead simplex search

  • unconstrained optimization

  • hybrid optimization method

  • global optimum


Introduction

Function minimization (or maximization) is extensively employed in various science fields. It refers to the determination of the decision variables of a function so that the function attains its minimum (or maximum) value. A majority of problems, especially engineering ones, are optimization problems (function minimization or maximization) in which the decision variables should be determined so that the systems operate at their best operating points with respect to a specific objective [1]. Since most problems in engineering and other science fields are non-linear, complicated, and have various local optima, it is necessary to use methods with a good ability to find global optima. No method guarantees convergence to the global optimum, but many heuristic methods have been proposed recently, and this class of methods presents great potential for obtaining the global optimum.

The heuristic optimization methods are mostly inspired by nature, being evolutionary, physics-based, or based on animal or human behavior. The common characteristic among these models is that a global search is executed using movements of the search points, driven by a vector quantity given by its dynamic system, such as the distance to the best position already reached by all points [2]. The various methods include Simulated Annealing [3], Differential Evolution [4], Artificial Chemical Reaction Optimization Algorithm [5], Water Evaporation Optimization Algorithm [6], Lightning Search Algorithm [7], Lightning Attachment Procedure Optimization [1], Gravitational Search Algorithm [8], Black Hole Algorithm [9], Particle Swarm Optimization [10], Multi-Particle Collision Algorithm [11], Firefly Algorithm [12], Crow search algorithm [13], Whale Optimization Algorithm [14], Monkey Search Algorithm [15], Bat-inspired Algorithm [16], Artificial Bee Colony algorithm [17], Grey Wolf Optimization algorithm [18], Fruit Fly algorithm [19], Dragonfly algorithm [20], Social Spider Optimization [21], Cuckoo Optimization Algorithm [22], Lion Optimization Algorithm [23], Tabu Search [24, 25], League Championship Algorithm [26], Soccer League Competition Algorithm [27], Fireworks Algorithm [28], Colliding Bodies Optimization [29], Tug of War Optimization [30], Thermal Exchange Optimization [31], Ions Motion Algorithm [32], Pathfinder Algorithm [33], Black Widow Optimization Algorithm [34], Turbulent Flow of Water-based Optimization [35], and many others.

Premature convergence and stagnation at a local minimum are common deficiencies in the majority of heuristic optimization methods. Hence, researchers are often concerned with improving them through modifications of previous optimizers, for example, modifications of: Grey Wolf Optimization algorithm [36], Bat Algorithm [37], Particle Swarm Optimization [38–43], Firefly Algorithm [44], Fruit Fly Algorithm [45], Ant Colony Algorithm [46], Artificial Bee Colony [47–49], and so on.

It is difficult to predict the best algorithm for every optimization problem. However, a hybrid of different optimization algorithms can be a potential solution, more efficient than using one single algorithm for solving complex problems [50]. Hybridization of heuristic algorithms with other heuristic or deterministic algorithms has been an active research area in recent years. Examples include Particle Swarm Optimization and Simulated Annealing [51], Particle Swarm Optimization and Genetic Algorithm [52, 53], Particle Swarm Optimization and Support Vector Machines [54], Particle Swarm Optimization, Ant Colony Optimization and 3-Opt algorithms [55], Artificial Bee Colony Algorithm and Differential Evolution [56], Gravitational Search Algorithm and Genetic Algorithm [57], Extended Random Search and Conjugate Gradient Method [58], Scatter Search and Nelder–Mead [59], Genetic Algorithm and Newton Method [60], Simulated Annealing and Genetic Algorithm [61], Simulated Annealing and Tabu Search Algorithm [62], Particle Swarm Optimization, Spider Monkey Optimization and Ageist Spider Monkey Optimization algorithms [63], Whale Optimization Algorithm, Lévy Flight and Differential Evolution [64], Particle Swarm Optimization and Gravitational Search Algorithm [65], etc. Other hybrid methods are obtained by combining, in many ways, the Nelder–Mead simplex search (NM) and Particle Swarm Optimization (PSO), proposed respectively by Nelder and Mead [66] and by Kennedy and Eberhart [10]. These methods are commonly called PSO-NM [67–72].

The focus of this work is to use a hybrid PSO-NM method for solving unconstrained optimization problems. The idea is to add a step to the PSO algorithm in which the particle with the current global best value is repositioned by a simplex methodology (the simplex is formed by the current global best and other particles), taking it away from the current nearest local minimum. This aims to prevent the global best particle from getting stuck in a local minimum, which would result in premature convergence. The repositioning strategy is also applied to other particles beyond the global best. The effectiveness of the proposed method is evaluated by computational studies comparing the rate of success in reaching the global minimum for many test functions.

Literature review

Over the past few decades, heuristic algorithms have become increasingly popular for dealing with challenging optimization problems in all kinds of engineering fields, because such techniques are less expensive and more efficient than conventional numerical approaches. The merits of heuristics lie in many aspects. The first is their randomness, which helps them avoid local extrema and explore the search space. The second is the black-box concept, in which the input and output of the considered problem are used without the need for gradient information. In addition, heuristic algorithms are easy to implement and their mathematical models are simple [73].

Heuristic methods have gained popularity. They are approximate non-deterministic optimization techniques that draw on specialized knowledge to address a particular issue. From heuristics, meta-heuristic algorithms then emerged as an attempt to acquire problem-independent decision-making strategies. Meta-heuristic algorithms are created to address a variety of optimization problems by directing the search process and trying to explore the search area fully. They have a greater level of abstraction than heuristics, allowing for the incorporation and management of many heuristics as well as the integration of methods for escaping from local optima to reach the global optimum [74]. The advantages and disadvantages of various meta-heuristic algorithms are briefly mentioned below.

Evolutionary Programming (EP) is simple to implement and suitable for many problems, but there is uncertainty in determining the best resolution. Differential Evolution (DE) does not become stuck in a local minimum, but its tuning parameters need improvement. Genetic Algorithm (GA) has good coverage of initial solutions in the search space, but its convergence is slow [74]. Another advantage of GA over traditional optimization is its treatment of more practical, dynamic, and highly nonlinear problems.

Bio-inspired optimization (BIO) easily copes with failure and, consequently, with real-world engineering challenges, and the BIO system always finds the best solution. However, broadening the range and development of bio-inspired methods for analysing new areas of application is necessary to overcome one of its drawbacks, which is the lack of balance amongst its elements [74–76].

The Jaya Algorithm is human-based and can be used to solve numerous optimization challenges. However, it might get stuck in local optima when addressing complex optimization issues because of the minimal population information in its single learning strategy [77].

Teaching learning-based optimization (TLBO) has the capacity to balance global search capability with convergence rate, and has ability for both local and global search, but its exploration process needs to be improved. Social evolution and learning optimization (SELO) is effective in finding global optimum solutions for unconstrained problems; however, it has a relatively slow convergence speed. Gravitational search algorithm (GSA) can explore local solutions and is simple to implement; nonetheless, it has a poor convergence rate and high computational time [74]. Although GSA surpasses traditional PSO and GA approaches on many issues, there are still some shortcomings, such as the exploration–exploitation discrepancy and local convergence [78].

Water Cycle Algorithm (WCA) is an effective population-based optimization algorithm, but it still suffers from premature convergence or gets stuck in local optima when solving some problems, and it requires substantial computation time [78, 79]. Evaluations on several tests with varying levels of complexity confirm the effectiveness of the electromagnetism optimization (EMO) technique regarding accuracy, speed, and consistency [21, 74]. The Multiverse Optimizer Algorithm (MVO) requires less computational effort; however, the algorithm can get stuck in a local optimal solution [74]. The Sine Cosine Algorithm (SCA) solves a large variety of optimization challenges and has the ability to explore diverse areas of a search space, avoiding local optima, converging towards the global optimum, and searching promising regions of the search space [74, 80].

By combining the two phases (fusion and fission), Nuclear Reaction Optimization (NRO) maintains a balance between its exploration and exploitation abilities. Tests indicated that this optimization method is a potentially powerful and efficient global optimization algorithm [74].

Among heuristic intelligent optimization algorithms, swarm intelligence optimization algorithms have been widely used to solve global optimization problems due to their simplicity, flexibility, and high efficiency. By introducing randomness into the optimization process, swarm intelligence optimization algorithms can determine the global optimal solution accurately and reasonably, which gives their solutions practical significance [81]. Among them, Particle Swarm Optimization (PSO) is one of the most widely used; it is simple, easy to implement, and effective, but it can get stuck in a local minimum and is unsatisfactory for solving multi-objective optimization problems [74].

Although several novel metaheuristics and solvers have been published and made available, developing robust and intelligent systems remains the need of the hour. This can be achieved by combining the known advantages of conventional methods with novel strategies or by designing hybrid algorithms [82].

PSO-NM hybrid methods

The PSO-NM method proposed by Liu and Yang [70] uses the NM search method to improve the efficiency of PSO by increasing the convergence rate. In each iteration, this method applies the NM search method to a simplex composed of each particle and the n other particles with the best fitness values. Following this, all particles will be closer to the nearest local optimum. To justify the proposed methodology, the authors say that the PSO method resists easily falling into a local optimum, which is not entirely true, as will become clear later in this work. The method proposed by Liu and Yang [70] is a modification of the method proposed by Zahara and Hu [69], in which the NM method is applied to just one simplex per iteration, composed of the n + 1 particles with the best fitness values, to improve the position of the (n + 1)th particle; the other particles' positions are adjusted by PSO, taking into account the positions of all particles used in the NM method.

Fan et al. [67] proposed a hybridization strategy combining the NM and PSO methods in a way that accelerates convergence. For an n-dimensional problem, the procedure proposed by the authors uses 3n + 1 particles; the particles are sorted by fitness, and the best n particles are saved for subsequent use. The top n + 1 particles are fed into the modified simplex search method to improve the (n + 1)th particle. Joined by the n best particles and the (n + 1)th particle, the last 2n particles are adjusted by the modified PSO method.

The PSO-NM hybrid approach proposed by Hsu and Gao [68] aims to improve both the convergence rate and the accuracy of the optimization algorithm, using an enhanced NM method to improve the Gbest position with the help of a center particle, because the center particle is generally closer to the optimum than Gbest during the search. Due to its frequent appearance as the best particle of the swarm, the center particle often attracts other particles and guides the search direction of the whole swarm, improving the convergence rate.

A PSO-NM method for crack detection in cantilever beams was proposed by Vakil Baghmisheh et al. [71] and used by Mesbahi et al. [72] to obtain the optimal parameters of a Li-ion battery model for vehicles. This hybrid PSO-NM is made up of a modified particle swarm optimization algorithm (PSO), aimed at identifying the most promising areas, and an NM simplex algorithm for performing local search within these areas. For the modification of the PSO method, a mutation operator is incorporated: in each iteration, two randomly selected particles are deleted and two new particles are introduced and positioned randomly in the search space. In addition, to increase the precision of the answer, the authors added the NM local search algorithm after the PSO.

Proposed new algorithm

Optimization techniques can be classified into two categories: deterministic (e.g., the NM simplex search method, steepest descent, Powell's conjugate direction method, the Rosenbrock method, etc.) and heuristic (e.g., PSO, the simulated annealing algorithm, the genetic algorithm, the firefly algorithm, etc.). The advantage of deterministic methods is the lower computational time required, but heuristic methods have a greater probability of reaching the global minimum. So the idea of this work is to propose an improvement of the PSO method by using a methodology for repositioning some particles, based on the simplex approach proposed by Nelder and Mead [66], to avoid the premature convergence caused by particles stuck in a local minimum, i.e., a hybridization strategy to further increase the probability of global optimization success. The NM, PSO, and proposed PSO-NM methods are presented below.

Particle swarm optimization (PSO)

The Particle Swarm Optimization algorithm is based on the movement of organisms in a bird flock or fish school, motivated by the social exchange of information. Birds and fish adjust their physical movement to avoid predators, seek food, mate, and optimize environmental parameters such as temperature. Humans adjust not only physical movement but cognitive or experiential variables as well [10]. So the idea of this method is to randomly generate a swarm of m points (called “particles”) in the search region of dimension n, with initial random velocities. The function value of each particle is evaluated at each iteration, and these function values are used to update the particles' velocities. The method can be used to solve unconstrained as well as constrained optimization problems [83].

Figure 1.

PSO algorithm.

The procedure of PSO algorithm, illustrated by the figure 1, is composed of the following steps:

  1. Positions initialization: randomly generate the initial positions (Xi) of the m particles within the limits of the n-dimensional search region. Considering that the search region has an upper bound, Xupper, and a lower bound, Xlower, a possible way to initialize the positions is:

    Xi = Xlower + rand() ∘ (Xupper − Xlower)  (1)

    where rand() is an n-dimensional vector of random numbers between 0 and 1 and ∘ denotes the element-wise product.
  2. Velocities initialization: randomly generate the initial velocities (Vi) of the m particles. It is not recommended that the initial velocities have large values, so one possibility is to initialize them as zero for all particles; another is to use equation (2);

  3. Evaluate the objective function: for each particle's current position, evaluate the objective function value (Fobj), also called fitness;

  4. Identify the best locations: for each particle, compare the current objective function value with the best value it has ever reached; if the current value is better than the previous one, set the particle's best location (Pbest) to the current position. Identify the best objective function value obtained with the current particle positions and compare it with the previous global best location (Gbest); if the current value is better, update Gbest;

  5. Update particle velocities: the particle velocities are updated by equation (3);

    Vi = w Vi + C1 rand() ∘ (Pbest,i − Xi) + C2 rand() ∘ (Gbest − Xi)  (3)

    where w is the inertia weight, C1 is the cognitive parameter, and C2 is the social parameter. Generally C1 = C2 = 2, and the inertia weight can be calculated by many equations, among which one commonly used is the linearly decreasing form of equation (4) [84];

    w = wmax − (wmax − wmin) k ∕ kmax  (4)

    where k is the current iteration and kmax the maximum number of iterations.
  6. Update particle positions: the particle positions are updated by applying equation (5);

    Xi = Xi + Vi  (5)
  7. Stop condition: if a stopping criterion has been reached, go to step (8); otherwise return to step (3). In this work we used two stopping criteria, and the iterative process is interrupted when either of them is satisfied. The stopping criteria are:

    1. A large number of consecutive iterations (5000) with no change larger than 1 × 10−7 in the value of the objective function evaluated at Gbest;

    2. All particles are very close to the Gbest:

  8. Print results: print the position of global best location and its fitness as the minimum reached.
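The steps above can be sketched in code. The following is a minimal illustrative Python implementation, not the author's Fortran code: the function name `pso`, the bound clipping, and the linearly decreasing inertia weight are assumptions for the sketch.

```python
import numpy as np

def pso(fobj, lower, upper, m=30, iters=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, seed=0):
    """Minimal particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    # Step 1: random initial positions inside the search region
    x = lower + rng.random((m, n)) * (upper - lower)
    # Step 2: zero initial velocities (one of the options mentioned above)
    v = np.zeros((m, n))
    # Steps 3-4: initial fitness and best locations
    f = np.apply_along_axis(fobj, 1, x)
    pbest, pbest_f = x.copy(), f.copy()
    g = np.argmin(f)
    gbest, gbest_f = x[g].copy(), f[g]
    for k in range(iters):
        # Linearly decreasing inertia weight (a common choice)
        w = w_max - (w_max - w_min) * k / iters
        r1, r2 = rng.random((m, n)), rng.random((m, n))
        # Step 5: velocity update (cognitive + social terms)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Step 6: position update, clipped to the search bounds
        x = np.clip(x + v, lower, upper)
        f = np.apply_along_axis(fobj, 1, x)
        # Step 4: update personal and global bests
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

For example, minimizing the sphere function `sum(x**2)` over [−5, 5]² quickly drives the global best fitness toward zero; the paper's stopping criteria could replace the fixed iteration count used here.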

Nelder–Mead simplex search method (NM)

The simplex method for the minimization of a function of n variables without constraints, proposed by Nelder and Mead [66] and based on an ingenious idea introduced by Spendley et al. [85], works by comparing the function values at the n + 1 vertices of a general simplex and replacing the vertex with the highest objective function value by another point. This replacement is repeated until an approximation of a local minimum is found.

In this method, initially, n + 1 points, i.e., P0, P1, P2, … , Pn, are considered, with function values y0, y1, y2, … , yn, respectively. The point with the highest function value is defined as yh = max(yi) at Ph, the point with the lowest function value as yl = min(yi) at Pl, and the centroid of all points with i ≠ h as Pcent. At each iteration a new simplex is formed by substituting Ph through reflection, contraction or expansion (these are illustrated in figure 2). The first operation consists of choosing a point, with a lower function value than yh, away from Ph through Pcent, by applying the reflection equation (6)

Pref = (1 + 𝛼) Pcent − 𝛼 Ph  (6)

where the reflection coefficient, 𝛼, is a positive constant (usually 𝛼 = 1).

Figure 2.

Nelder–Mead simplex substitution of Ph.

If the function value of Pref is lower than yl, i.e., the reflection resulted in a new minimum, it may be possible to obtain an even lower function value by applying the expansion equation (7)

Pexp = 𝛾 Pref + (1 − 𝛾) Pcent  (7)

where the expansion coefficient, 𝛾, is a positive coefficient greater than unity (usually 𝛾 = 2).

If the function value at Pexp is lower than the function value at Pl, the new simplex is formed by replacing Ph by Pexp, otherwise Ph is replaced by Pref.

If the point obtained by the reflection operation has a function value lower than yh and lower than at least one other yi, Ph is replaced by Pref. Otherwise, if the fitness of Pref is higher than yi for all i ≠ h, a new maximum has been obtained; in this case, it is necessary to try to obtain a better new point by contraction, which is done by applying equation (8)

Pcont = 𝛽 Ph + (1 − 𝛽) Pcent  (8)

where the contraction coefficient, 𝛽, lies between 0 and 1 (usually 𝛽 = 1∕2). Pcont is accepted unless its function value remains higher than the value at Ph. In case the contraction fails, all simplex vertices are substituted by applying equation (9)

Pi = (Pi + Pl) ∕ 2  (9)

which shrinks the simplex toward the best point Pl.
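One iteration of the simplex update described above can be sketched as follows. This is an illustrative Python version using the usual coefficients 𝛼 = 1, 𝛾 = 2, 𝛽 = 1∕2; the function name and tie-breaking details are simplifying assumptions, not the exact published procedure.

```python
import numpy as np

def nm_step(f, pts, alpha=1.0, gamma=2.0, beta=0.5):
    """One Nelder-Mead iteration: replace the worst vertex by
    reflection, expansion or contraction; shrink if all fail."""
    vals = np.array([f(p) for p in pts])
    h, l = np.argmax(vals), np.argmin(vals)
    # Centroid of all vertices except the worst one (i != h)
    cent = (np.sum(pts, axis=0) - pts[h]) / (len(pts) - 1)
    # Reflection away from the worst point through the centroid
    p_ref = (1 + alpha) * cent - alpha * pts[h]
    f_ref = f(p_ref)
    if f_ref < vals[l]:
        # New minimum: try expanding further along the same direction
        p_exp = gamma * p_ref + (1 - gamma) * cent
        pts[h] = p_exp if f(p_exp) < vals[l] else p_ref
    elif f_ref < np.max(np.delete(vals, h)):
        # Better than at least one other vertex: accept the reflection
        pts[h] = p_ref
    else:
        # Contract toward the centroid from the better of Ph and Pref
        p_h = pts[h] if vals[h] < f_ref else p_ref
        p_cont = beta * p_h + (1 - beta) * cent
        if f(p_cont) < min(vals[h], f_ref):
            pts[h] = p_cont
        else:
            # Contraction failed: shrink every vertex toward the best point
            pts[:] = (pts + pts[l]) / 2.0
    return pts
```

Iterating this step on a simple quadratic steadily shrinks the simplex onto the local minimum, which is the behavior the hybrid method exploits.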

PSO method with particle repositioning based on NM simplex (PSO-NM)

The PSO-NM method we propose has a different purpose from all the PSO-NM methods presented before. If one particle, at the beginning of the optimization process, gets close to a local minimum, it can attract other particles to this local minimum and lead the algorithm to converge to this value, in other words, to a premature convergence. The attempt to avoid getting stuck in a local minimum that we propose in this work uses the reflection operation of the NM simplex search method on the particle with the global best fitness ever reached. As illustrated in figure 3, this reflection of the current global best particle is done through a center obtained from n other particles (randomly chosen). The reflection operation takes the particle to a position away from the current nearest local minimum, aiming to avoid getting stuck on this local optimum.

Figure 3.

Repositioning of particle with current global best location.

But the repositioning of just one particle may not be satisfactory. To address this, we propose that this repositioning operation by NM reflection can also occur, depending on a particle repositioning probability, for particles other than the one with the global best fitness. This additional repositioning can improve the ability to avoid premature convergence and, besides that, can aid better exploration of the search domain. These procedures are performed after the update of positions (step 6) of the PSO algorithm, as shown in figure 4.
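The repositioning operation can be sketched as a small helper, assuming 𝛼 = 1 and that the reflection center is the centroid of n randomly chosen other particles; the function name and the clipping to the search bounds are assumptions of this sketch, not details stated in the text.

```python
import numpy as np

def reposition(x, g, n_dim, lower, upper, rng):
    """Reflect particle g through the centroid of n_dim randomly chosen
    other particles (NM reflection with alpha = 1), moving it away from
    the local minimum it is currently approaching."""
    m = x.shape[0]
    # Choose n_dim distinct particles other than g to form the center
    others = rng.choice([i for i in range(m) if i != g],
                        size=n_dim, replace=False)
    cent = x[others].mean(axis=0)
    # Reflection through the center: P' = 2*Pcent - P
    x_new = 2.0 * cent - x[g]
    # Keep the repositioned particle inside the search region
    return np.clip(x_new, lower, upper)
```

In the proposed algorithm this would be called for the global best particle every iteration, and for each other particle with the chosen repositioning probability.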

Figure 4.

Proposed PSO-NM algorithm.

Numerical simulations methodology

The numerical simulations were implemented through two codes in Fortran. In the first code the PSO method was implemented, as presented in subsection 3.1, to obtain the minimal function value, with the possibility of using any number of particles. In the second, the proposed PSO-NM heuristic optimization method was implemented, as presented in subsection 3.3, in a way that allows the user to choose the number of particles and the probability of particle repositioning.

For both methods, we used two stopping criteria simultaneously: the first verifies whether, for many consecutive iterations (5000), the change in the global best fitness is less than the tolerance (tol = 1 × 10−7), and the second verifies whether all particles are very close to the current global best (a distance less than the tolerance, tol = 1 × 10−7). If at least one of these stopping criteria is reached, the optimization run terminates.
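The two stopping criteria can be sketched as a small check, assuming the global best fitness is stored in a per-iteration history list; the function name and data layout are assumptions of this sketch.

```python
import numpy as np

def should_stop(history, x, gbest, tol=1e-7, patience=5000):
    """Check the two stopping criteria used in the paper: (a) the best
    fitness has not changed by more than tol for `patience` consecutive
    iterations, or (b) every particle lies within tol of Gbest."""
    stalled = (len(history) > patience and
               abs(history[-1] - history[-patience - 1]) <= tol)
    clustered = bool(np.all(np.linalg.norm(x - gbest, axis=1) <= tol))
    return stalled or clustered
```

The distance test makes the check terminate early when the swarm has fully collapsed onto the current global best, even before the iteration budget is exhausted.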

To compare the effectiveness of the PSO and the proposed PSO-NM method, we used various test functions (presented in Appendix A; most of them have various local minima). For each test function, various numbers of particles were used (5, 7, 10, 15, 20, 30, 40, 50, 100, 200, 300, and 500) and, for the PSO-NM method, various probabilities of particle repositioning (0, 1, 2, 3, 4, 5, 7, and 10%) were applied to particles beyond the current particle with the best position in the search domain.

Percentage of global minimization

The optimization task for each case, on each test function, was performed 1000 times. After convergence, the result obtained in each run was compared to the global optimum (the global optimum is known for all test functions used), making it possible to evaluate the percentage of successful global optimization.
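The percentage of successful global optimization can then be computed by comparing each run's final fitness with the known optimum; the tolerance used below is an assumed value for illustration, not one stated in the text.

```python
import numpy as np

def success_rate(results, global_opt, tol=1e-4):
    """Percentage of runs whose final best fitness matches the known
    global optimum within a tolerance (tol is an assumed value)."""
    results = np.asarray(results, float)
    return 100.0 * np.mean(np.abs(results - global_opt) <= tol)
```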

Evaluating the effect of the particle repositioning probability

In order to evaluate the effect of particle repositioning on the particle positions throughout the iterations, the mean function value (over all 1000 tests performed for each case) at the global best position was calculated to compare the PSO and the proposed PSO-NM with various probabilities of particle repositioning. These tests were performed for various numbers of particles. Furthermore, for some simulations, the positions of all particles were stored to allow a visual comparison between the methods.

Numerical simulation results and discussion

In this section, we present the comparison of effectiveness between the PSO and the proposed PSO-NM method. The rates of successful optimization for all cases and test functions studied in this work are presented in Appendix B.

Figures 5, 6, 7 and 8 show the percentages of successful function minimization with respect to the number of particles, for Ackley's function (2-D), the Griewank function (2-D), the Rastrigin function (3-D), and the Rastrigin function (4-D), respectively. It can be seen in these figures that, as expected, the larger the number of particles, the greater the percentage of successful function minimization, but particle repositioning presents benefits only for probabilities up to 5%; repositioning probabilities greater than 5% lead to poor performance. The large number of results presented in Appendix B makes this clearer, although it is not true for all test functions; in a few cases, the best results were obtained with a 10% particle repositioning probability.

Figure 5.

Percentage of success in minimization of Ackley’s function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

Figure 6.

Percentage of success in minimization of Griewank function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

Figure 7.

Percentage of success in minimization of Rastrigin function (3-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

Figure 8.

Percentage of success in minimization of Rastrigin function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

Figures 9, 10, 11 and 12 show the particle positions, in the search region, at selected iterations of the PSO and proposed PSO-NM methods, for the Griewank function (2-D), with global minimum at X = (0,  0). In these figures, it is clear that particle repositioning causes a greater exploration of the search region, which explains the higher percentage of success in obtaining the global minimum. It can also be seen, in figures 9 and 10, that particles get stuck on a local minimum and stay there at least until iteration 500, whereas in figures 11 and 12 the global best particles move away from the local minimum after just 20 iterations. These facts reinforce the idea that the proposed method is efficient at preventing the PSO method from getting stuck in a local minimum, but that repositioning only the particle with the current global best position is not good enough.

Figure 9.

Particles positions, at some iterations, for PSO method with 50 particles applied to Griewank function (2-D). A black triangle (▴) for current global best and gray circle (∙) for particles.

Figure 10.

Particles positions, at some iterations, for PSO-NM method with 50 particles and 0% of particles repositioning applied to Griewank function (2-D). A black triangle (▴) for current global best and gray circle (∙) for particles.

Figure 11.

Particles positions, at some iterations, for PSO-NM method with 50 particles and 3% of particles repositioning applied to Griewank function (2-D). A black triangle (▴) for current global best and gray circle (∙) for particles.

Figure 12.

Particles positions, at some iterations, for PSO-NM method with 50 particles and 10% of particles repositioning applied to Griewank function (2-D). A black triangle (▴) for current global best and gray circle (∙) for particles.

Figure 13.

Progress, in initial iterations, of mean function values at G best position, for PSO and PSO-NM methods with 50 particles and various probabilities of particles repositioning applied to Griewank function (2-D).

Figure 14.

Progress of mean function values at G best position, for PSO and PSO-NM methods with 50 particles and various probabilities of particles repositioning applied to Griewank function (2-D).

Figures 13 and 14 depict the mean of the Griewank function (2-D) value at the global best position ever reached. Figure 13 shows the behavior for the first 200 iterations, and figure 14 shows the behavior of the mean function values up to 5000 iterations. Initially, the PSO method converges faster than PSO-NM, but analyzing figure 13, it can be seen that the particles get stuck at a local minimum, and a higher probability of particle repositioning makes it easier to escape from the local minimum. However, using 10% particle repositioning presented some difficulty in the convergence process even after escaping from the local minimum. Again, it is clear that the proposed PSO-NM method presents advantages, but the percentage of particle repositioning should not be too high.

Conclusions

In this work we studied the effect of adding a step to the PSO algorithm with regard to the percentage of success in reaching the global minimum of functions with several local minima. In this new step, the repositioning of the global best and other particles is performed, depending on a repositioning probability, which aims to prevent the global best particle from getting stuck in a local minimum. Several simulations were carried out through computational studies, which demonstrated that this new step works very well. The repositioning of the particle with the global best solution increases the percentage of success in reaching the global best solution, but better results can be obtained by applying the repositioning strategy to other particles with repositioning probabilities of up to 5%. Repositioning probabilities greater than 5% should be avoided because, in various cases, they presented worse results. The strategy we propose to avoid getting stuck on a local optimum is very simple and can be easily adapted to other heuristic optimization methods.

The test functions employed in this work are given below:

  • Function 1 (2-D): Rosenbrock function

    1. Global optimum at X = (1.0,  1.0) with f (X) = 0.0

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 2 (2-D): Schaffer function no 2

    1. Global optimum at X = (0.0,  0.0) with f (X) = 0.0

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 3 (2-D): Schaffer function no 4

    1. Global optimum at X = (0.0,  1.25313) with f (X) = 0.292579

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 4 (2-D): Eggholder function

    1. Global optimum at X = (512.0,  404.2319) with f (X) = −959.6407

    2. Function evaluated in the range xi ∈ [−512,  512] ∀ i

  • Function 5 (2-D): Hölder table function

    1. Global optimum at X = (±8.05502,  ±9.66459) with f (X) = −19.2085

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 6 (2-D): Ackley’s function

    1. f (X) = −20.0 exp(−0.2 √((1∕n) ∑i xi²)) − exp((1∕n) ∑i cos(2 𝜋 xi)) + e + 20.0, with n = 2

    2. Global optimum at X = (0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 7 (2-D): Beale’s function

    1. Global optimum at X = (3.0,  0.5) with f (X) = 0.0

    2. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 8 (2-D): Goldstein-Price function

    1. Global optimum at X = (0.0,  −1.0) with f (X) = 3.0

    2. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 9 (2-D): Levy function no 13

    1. f (X) = sin²(3 𝜋 x1) + (x1 − 1.0)² (1.0 + sin²(3 𝜋 x2)) + (x2 − 1.0)² (1.0 + sin²(2 𝜋 x2))

    2. Global optimum at X = (1.0,  1.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 10 (2-D): Six-Hump Camel function

    1. Global optimum at X = ±(0.0898,  −0.7126) with f (X) = −1.0316

    2. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 11 (2-D): Bukin function no 6

    1. Global optimum at X = (−10.0,  1.0) with f (X) = 0.0

    2. Function evaluated in the range x1 ∈ [−15,  −5] and x2 ∈ [−3,  3]

  • Function 12 (2-D): Cross-in-Tray

    1. Global optimum at X = (±1.3491,  ±1.3491) with f (X) = −2.06261

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 13 (2-D): Drop Wave function

    1. Global optimum at X = (0.0,  0.0) with f (X) = −1.0

    2. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 14 (2-D): Rastrigin function

    1. f (X) = 10.0 n + ∑i (xi² − 10.0 cos(2 𝜋 xi)), with n = 2

    2. Global optimum at X = (0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 15 (2-D): Griewank function

    1. f (X) = 1.0 + (1∕4000) ∑i xi² − ∏i cos(xi∕√i), with n = 2

    2. Global optimum at X = (0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−600,  600] ∀ i

  • Function 16 (2-D): Schwefel function

    1. f (X) = 418.9829 n − ∑i xi sin(√|xi|), with n = 2

    2. Global optimum at X = (420.9687,  420.9687) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 17 (3-D): Rastrigin function

    1. with n = 3

    2. Global optimum at X = (0.0,  0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 18 (3-D): Griewank function

    1. with n = 3

    2. Global optimum at X = (0.0,  0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−600,  600] ∀ i

  • Function 19 (3-D): Schwefel function

    1. with n = 3

    2. Global optimum at X = (420.9687,  420.9687,  420.9687) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 20 (4-D): Rosenbrock function

    1. Global optimum at X = (1.0,  1.0,  1.0,  1.0) with f (X) = 0.0

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 21 (4-D): Wood Function

    1. Global optimum at X = (1.0,  1.0,  1.0,  1.0) with f (X) = 0.0

    2. Function evaluated in the range xi ∈ [−100,  100] ∀ i

  • Function 22 (4-D): Schwefel function

    1. with n = 4

    2. Global optimum at X = (420.9687,  420.9687,  420.9687,  420.9687) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 23 (4-D): Rastrigin function

    1. with n = 4

    2. Global optimum at X = (0.0,  0.0,  0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−500,  500] ∀ i

  • Function 24 (4-D): Griewank function

    1. with n = 4

    2. Global optimum at X = (0.0,  0.0,  0.0,  0.0) with f (X) = 0.0

    3. Function evaluated in the range xi ∈ [−600,  600] ∀ i
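For reference, several of the benchmarks listed above have compact textbook definitions; the snippet below sketches four of them (Rastrigin, Griewank, Schwefel, and Ackley) in their standard forms, which is what the "with n = ..." items refer to.

```python
import math

def rastrigin(x):
    # f(X) = 10 n + sum(x_i^2 - 10 cos(2 pi x_i)); minimum 0 at the origin
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def griewank(x):
    # f(X) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))); minimum 0 at origin
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

def schwefel(x):
    # f(X) = 418.9829 n - sum(x_i sin(sqrt(|x_i|))); ~0 at x_i = 420.9687
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def ackley(x):
    # minimum 0 at the origin; the "+ e + 20" tail cancels both exponentials there
    n = len(x)
    a = -20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
    b = -math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
    return a + b + math.e + 20.0
```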

The percentage of success in obtaining the global minimum for all test functions employed in this work is given in Tables B.1–B.24.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     39.9    99.9    100.0   100.0   100.0   99.0    95.7    67.7    18.6
7     100.0   100.0   100.0   100.0   100.0   99.9    99.5    90.1    40.8
10    100.0   100.0   100.0   100.0   100.0   100.0   99.9    98.5    66.9
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   90.0
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   96.3
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.1
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.5
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.1

Percentage of success in minimization of Rosenbrock function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     92.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
7     99.6    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
10    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.2

Percentage of success in minimization of Schaffer function no 2 (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     85.4    99.7    99.6    99.4    99.2    99.4    99.2    97.3    89.6
7     98.5    100.0   100.0   99.9    99.6    100.0   99.7    99.4    97.5
10    99.6    100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.4
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.8
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.3

Percentage of success in minimization of Schaffer function no 4 (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.3     1.3     5.6     9.1     15.6    24.3    23.9    22.6    19.0
7     0.3     2.0     5.7     14.0    22.1    36.1    42.5    55.1    53.3
10    0.8     1.9     7.5     15.4    27.2    41.7    55.7    66.0    77.8
15    1.7     2.6     8.7     18.5    27.2    42.4    56.8    78.0    87.5
20    1.4     3.2     12.0    20.2    31.5    44.4    61.2    80.4    91.7
30    2.3     3.2     20.4    28.8    39.1    52.6    64.8    84.2    94.7
40    4.0     5.0     26.9    36.8    48.4    58.9    71.9    88.3    94.8
50    2.8     4.2     31.4    43.6    58.7    69.1    76.3    89.1    97.0
100   4.7     6.0     49.5    66.4    79.2    88.1    94.4    98.4    99.6
200   6.0     7.0     70.2    85.8    95.1    98.2    99.2    99.9    100.0
300   6.5     7.0     84.0    95.2    97.9    99.8    99.9    100.0   100.0
500   9.4     7.8     94.0    98.6    99.6    100.0   100.0   100.0   100.0

Table B.4

Percentage of success in minimization of Eggholder function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     31.2    57.8    78.9    87.3    92.0    93.4    93.3    93.7    94.2
7     33.5    65.9    84.0    86.8    93.5    96.9    96.7    97.1    98.4
10    51.8    70.4    92.7    94.6    96.4    97.4    99.1    99.1    99.5
15    64.3    76.5    98.3    98.7    99.1    99.3    99.9    99.9    100.0
20    77.6    85.7    99.7    99.8    99.9    99.9    99.9    100.0   100.0
30    89.3    94.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    94.9    96.1    100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    97.5    98.4    100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   99.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.5

Percentage of success in minimization of Hölder table function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     15.9    37.7    60.3    74.5    78.0    79.5    81.5    56.1    7.9
7     28.1    41.1    55.7    68.8    77.1    81.1    82.8    74.2    20.7
10    38.5    48.0    62.4    66.5    75.6    81.3    84.9    81.0    43.9
15    55.1    54.9    67.3    73.2    78.4    83.8    89.1    89.2    64.7
20    61.1    67.0    69.1    75.7    80.8    84.6    90.5    94.7    74.9
30    73.5    76.9    81.1    85.3    87.9    91.2    94.1    97.4    89.1
40    83.1    85.3    88.0    87.1    91.9    94.2    97.2    99.3    95.2
50    90.1    88.7    89.6    92.5    93.3    97.4    97.8    99.8    96.3
100   98.5    98.0    97.1    98.8    99.3    99.9    99.9    100.0   99.1
200   100.0   100.0   100.0   99.8    100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.6

Percentage of success in minimization of Ackley’s function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     30.7    66.6    69.6    70.6    68.5    67.6    66.6    72.5    74.0
7     64.6    69.7    70.8    72.3    76.2    74.9    70.6    73.9    74.5
10    70.4    71.3    72.1    73.3    83.6    82.9    81.1    77.7    77.3
15    79.5    77.8    76.4    80.6    84.3    88.4    90.6    89.2    83.8
20    82.0    83.0    82.7    84.0    87.4    91.9    93.7    94.0    89.8
30    83.5    88.2    89.2    88.3    89.6    94.1    96.3    98.4    96.1
40    90.2    89.8    89.4    90.4    94.0    95.6    98.5    99.3    98.7
50    92.9    93.3    94.2    95.1    94.6    96.4    98.1    99.9    99.7
100   97.8    98.9    98.6    99.0    98.7    99.1    99.7    100.0   100.0
200   99.7    99.9    99.7    100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.7

Percentage of success in minimization of Beale’s function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     17.6    93.3    98.5    98.0    98.8    98.7    99.6    99.2    77.3
7     47.1    97.8    99.6    99.3    98.7    99.5    99.9    99.7    94.6
10    78.0    99.5    100.0   99.8    99.8    100.0   99.9    100.0   99.8
15    98.0    99.9    100.0   100.0   100.0   100.0   100.0   100.0   99.2
20    99.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.2
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.9
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.8
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.8

Percentage of success in minimization of Goldstein-Price function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     81.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.4
7     97.3    100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.8
10    99.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.8
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.9
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.9

Percentage of success in minimization of Levy function no 13 (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     74.5    100.0   100.0   100.0   99.9    100.0   99.8    99.7    97.8
7     97.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
10    99.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.7
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.10

Percentage of success in minimization of Six-Hump Camel function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.4     3.7     2.7     5.5     4.7     5.8     8.6     10.0    13.8
7     0.5     1.3     2.1     3.8     5.0     6.8     7.0     11.0    12.1
10    0.4     2.0     1.8     3.1     5.0     6.4     5.5     11.3    14.7
15    0.4     1.2     2.2     2.4     4.1     6.2     5.2     10.7    14.8
20    0.7     0.9     1.3     2.1     3.4     6.3     8.3     10.8    14.7
30    0.7     0.7     1.4     2.3     2.7     5.4     6.7     10.2    14.8
40    0.8     0.3     0.7     1.0     2.8     5.3     4.9     9.1     14.7
50    0.7     0.5     1.1     1.7     3.0     4.0     5.8     10.5    12.1
100   0.5     0.7     1.6     1.5     2.1     4.0     7.2     9.1     14.1
200   0.6     0.4     0.7     2.1     2.6     4.0     5.1     9.7     14.7
300   0.6     0.2     0.6     1.2     2.5     4.7     5.8     10.8    15.6
500   0.7     0.9     1.2     1.6     2.2     5.0     6.7     10.5    15.3

Table B.11

Percentage of success in minimization of Bukin function no 6 (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     96.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
7     99.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
10    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
15    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
20    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.12

Percentage of success in minimization of Cross-in-Tray function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     31.0    69.2    86.2    90.5    91.2    90.9    91.0    91.9    80.6
7     51.9    74.8    91.7    95.2    96.6    96.3    96.7    97.3    95.9
10    66.8    85.2    94.4    97.4    99.0    99.1    99.4    99.5    99.1
15    82.1    90.7    97.0    99.2    99.7    99.8    100.0   99.9    99.0
20    87.7    96.1    99.0    100.0   100.0   100.0   99.9    100.0   98.9
30    95.5    98.7    99.9    99.9    100.0   100.0   100.0   100.0   99.5
40    98.9    99.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    99.6    99.8    99.9    100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.13

Percentage of success in minimization of Drop Wave function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     42.2    98.9    99.9    100.0   99.9    100.0   100.0   100.0   84.8
7     69.1    99.2    100.0   100.0   100.0   100.0   100.0   100.0   96.8
10    86.8    99.6    100.0   100.0   100.0   100.0   100.0   100.0   99.5
15    96.1    100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.9
20    98.3    99.8    100.0   100.0   100.0   100.0   100.0   100.0   99.8
30    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
40    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
50    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.14

Percentage of success in minimization of Rastrigin function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     3.9     61.7    70.1    80.1    83.0    81.1    81.3    78.9    77.0
7     8.9     59.0    71.4    80.6    85.4    90.9    88.6    88.7    85.7
10    21.3    64.3    68.6    80.0    86.4    93.0    94.5    93.3    92.6
15    38.0    65.2    76.2    82.6    86.3    92.1    95.9    97.2    99.9
20    54.7    70.0    79.0    82.6    87.3    93.4    95.9    97.3    96.3
30    64.9    77.2    82.7    86.6    90.3    94.6    97.0    99.5    97.2
40    73.8    80.3    88.3    91.2    93.2    96.7    97.9    99.7    99.2
50    89.0    84.4    88.5    92.5    93.7    97.0    97.8    99.9    99.6
100   89.0    92.5    96.1    97.4    98.9    99.3    99.9    100.0   100.0
200   95.6    96.5    98.9    99.7    100.0   100.0   100.0   100.0   100.0
300   97.5    98.0    100.0   100.0   100.0   99.9    100.0   100.0   100.0
500   99.2    99.8    99.9    100.0   100.0   100.0   100.0   100.0   100.0

Table B.15

Percentage of success in minimization of Griewank function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     5.8     8.3     20.5    48.8    58.4    64.8    67.8    69.4    44.0
7     10.0    9.3     22.4    49.7    70.9    77.0    81.4    85.2    83.6
10    16.6    14.4    29.3    48.9    74.0    85.6    90.8    93.6    97.5
15    24.8    20.2    38.1    52.4    77.0    89.9    97.0    98.3    99.6
20    29.8    27.3    47.7    61.9    81.3    94.5    97.5    99.3    100.0
30    45.5    36.5    58.6    78.1    91.3    97.0    99.5    100.0   100.0
40    52.2    43.9    74.5    90.2    96.7    98.9    99.9    100.0   100.0
50    62.2    52.5    85.2    94.7    98.2    100.0   100.0   100.0   100.0
100   82.6    77.3    97.4    99.9    100.0   100.0   100.0   100.0   100.0
200   97.2    96.4    100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   99.9    99.4    100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   99.8    100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.16

Percentage of success in minimization of Schwefel function (2-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     5.9     92.0    99.2    100.0   99.9    99.9    99.3    84.9    14.2
7     25.2    91.3    99.6    99.9    100.0   100.0   100.0   96.9    39.5
10    50.4    95.8    99.8    100.0   100.0   100.0   100.0   99.8    72.9
15    75.4    97.3    100.0   100.0   100.0   100.0   100.0   100.0   93.0
20    83.9    97.9    100.0   100.0   100.0   100.0   100.0   100.0   98.8
30    94.5    98.7    100.0   100.0   100.0   100.0   100.0   100.0   99.8
40    97.2    99.5    100.0   100.0   100.0   100.0   100.0   100.0   99.9
50    98.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
100   99.9    100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.17

Percentage of success in minimization of Rastrigin function (3-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.0     13.1    16.5    19.7    20.8    20.7    15.3    13.7    10.8
7     0.4     11.3    17.4    20.9    25.8    26.0    20.3    23.1    19.2
10    1.2     11.4    14.9    22.7    29.2    34.7    32.9    29.9    27.5
15    3.4     15.7    20.2    26.9    33.7    44.1    42.5    38.4    39.6
20    11.1    14.5    21.6    27.8    38.5    45.6    49.4    50.7    49.4
30    16.3    22.9    28.7    33.5    43.7    49.2    56.3    61.0    58.1
40    25.5    31.5    32.4    37.6    46.0    53.6    60.3    66.7    67.8
50    30.9    34.0    39.3    45.3    49.2    59.6    64.8    76.2    74.3
100   50.3    54.2    59.0    60.4    64.6    72.4    80.5    88.2    91.0
200   67.4    71.2    70.0    77.6    80.3    85.3    90.5    97.8    99.1
300   77.9    79.9    83.2    87.1    89.8    93.6    95.8    99.2    99.4
500   87.0    89.9    92.9    93.9    96.0    98.0    98.1    99.9    100.0

Table B.18

Percentage of success in minimization of Griewank function (3-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     1.0     1.7     1.8     3.2     4.4     7.5     6.7     3.9     0.0
7     1.4     2.0     2.5     4.3     12.9    27.6    40.3    45.7    17.3
10    2.4     2.8     3.6     6.0     13.5    25.8    49.7    67.2    59.1
15    4.7     3.8     6.3     5.9     14.8    27.4    46.4    78.7    89.2
20    6.2     4.7     7.4     11.7    20.6    33.3    50.4    84.1    95.0
30    8.3     5.9     10.8    21.4    27.2    43.6    60.8    91.1    98.9
40    9.7     8.1     14.7    27.7    39.7    55.3    70.4    93.6    99.6
50    17.4    8.2     19.4    32.9    47.6    64.7    78.4    95.8    99.9
100   26.7    16.2    41.3    62.5    78.6    91.4    96.2    99.7    100.0
200   45.3    31.4    71.8    87.5    98.2    99.6    99.9    100.0   100.0
300   59.1    46.4    86.1    97.6    99.8    100.0   100.0   100.0   100.0
500   76.7    65.2    96.6    99.8    100.0   100.0   100.0   100.0   100.0

Table B.19

Percentage of success in minimization of Schwefel function (3-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     1.4     92.5    83.1    63.3    26.5    1.9     0.2     0.0     0.0
7     93.0    96.6    92.6    83.3    53.5    14.1    1.7     0.0     0.0
10    99.0    98.2    97.1    90.7    75.5    38.4    8.5     0.1     0.0
15    99.7    99.5    98.9    97.3    91.8    66.1    32.1    1.3     0.0
20    99.8    99.9    99.4    98.6    96.3    85.7    56.3    3.2     0.0
30    100.0   100.0   100.0   100.0   99.2    95.0    80.0    11.6    0.2
40    100.0   100.0   100.0   99.7    99.6    98.7    91.7    25.2    0.2
50    100.0   100.0   99.9    99.9    100.0   99.6    93.9    41.5    0.3
100   100.0   100.0   100.0   100.0   100.0   99.9    99.5    85.6    6.2
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   98.4    27.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   56.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   83.6

Table B.20

Percentage of success in minimization of Rosenbrock function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.1     99.4    98.3    93.2    56.2    4.8     1.3     0.0     0.0
7     85.2    99.8    98.8    97.2    83.5    31.6    3.8     0.2     0.0
10    99.9    99.9    99.4    97.8    93.6    68.6    19.1    1.0     0.1
15    99.7    99.9    99.8    98.8    98.0    90.2    57.8    2.7     0.1
20    100.0   99.9    99.8    99.4    99.3    96.1    82.0    9.2     0.8
30    100.0   100.0   100.0   99.8    99.8    99.6    95.0    32.4    1.2
40    100.0   100.0   100.0   99.9    99.9    99.9    98.9    58.5    3.5
50    100.0   100.0   100.0   100.0   100.0   100.0   99.5    76.6    4.0
100   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.1    25.2
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   73.7
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   91.9
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   99.5

Table B.21

Percentage of success in minimization of Wood function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.0     0.2     0.4     0.2     0.5     0.5     0.3     0.0     0.0
7     0.1     0.1     0.7     0.3     0.4     0.9     1.1     2.2     0.5
10    0.1     0.5     0.2     1.2     1.0     1.6     2.2     9.5     20.6
15    1.2     0.3     0.7     1.5     2.1     2.5     3.5     11.0    41.6
20    1.6     0.9     1.0     1.2     3.0     3.7     5.1     14.2    50.0
30    1.0     0.8     1.9     2.7     4.0     7.1     10.0    23.7    58.9
40    2.8     1.1     2.0     2.6     5.9     10.2    16.6    30.9    68.5
50    2.4     1.0     2.2     4.1     6.5     11.3    20.1    36.4    76.5
100   3.5     2.1     5.3     10.4    20.3    31.7    44.8    76.2    96.1
200   7.9     4.5     12.8    26.9    48.9    67.9    82.4    96.6    100.0
300   14.4    8.8     22.2    46.5    68.0    84.3    94.5    99.6    100.0
500   20.3    10.6    39.5    69.2    87.4    97.6    99.6    100.0   100.0

Table B.22

Percentage of success in minimization of Schwefel function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.0     0.8     0.5     1.4     1.7     1.4     1.5     1.7     0.7
7     0.2     1.0     1.6     3.8     3.0     2.2     2.4     1.6     1.1
10    0.0     0.9     2.8     4.1     6.8     4.6     5.7     3.8     1.6
15    0.1     2.5     3.6     6.9     11.5    11.0    10.3    6.3     4.5
20    0.7     2.8     3.7     7.5     10.9    15.8    14.4    11.0    8.0
30    2.3     4.2     6.3     9.1     13.3    18.2    19.9    15.6    13.1
40    3.1     5.1     7.8     11.6    16.1    19.6    28.1    22.4    17.5
50    5.5     6.8     13.9    13.4    17.7    23.8    27.0    27.4    22.1
100   15.6    16.6    19.2    21.2    25.3    30.9    39.1    42.6    37.0
200   29.9    29.8    33.8    36.6    39.2    44.2    55.0    66.1    62.2
300   39.5    36.6    39.7    45.7    49.5    55.7    62.6    78.8    71.8
500   54.2    54.0    57.3    59.0    61.9    68.1    76.0    91.9    86.9

Table B.23

Percentage of success in minimization of Griewank function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

n     PSO     PSO-NM with probability of repositioning
              0%      1%      2%      3%      4%      5%      7%      10%
5     0.3     78.5    97.8    98.9    98.2    96.2    89.0    35.3    0.5
7     7.0     86.3    98.8    99.9    99.9    99.5    97.8    74.7    3.4
10    21.9    90.7    99.4    100.0   100.0   99.9    99.8    95.1    14.3
15    40.8    93.3    99.7    100.0   100.0   100.0   100.0   99.4    45.4
20    55.9    94.2    99.9    100.0   100.0   100.0   100.0   100.0   73.7
30    71.7    94.1    99.7    100.0   100.0   100.0   100.0   100.0   92.7
40    84.4    96.2    99.8    100.0   100.0   100.0   100.0   100.0   97.4
50    91.8    97.3    99.8    100.0   100.0   100.0   100.0   100.0   98.9
100   98.7    99.7    100.0   100.0   100.0   100.0   100.0   100.0   100.0
200   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
300   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
500   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0

Table B.24

Percentage of success in minimization of Rastrigin function (4-D). A comparison of PSO and PSO-NM with varying probabilities of repositioning.

Conflict of interest

The author declares no conflict of interest.

Acknowledgement

The author would like to thank Marcel Joly for his comments, which helped in the construction of this method and improved the quality of this work.

References

  1. Foroughi Nematollahi A, Rahiminejad A, Vahidi B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl Soft Comput. 2017 Oct;59:596–621.
  2. Okamoto T, Hirata H. Global optimization using a multipoint type quasi-chaotic optimization method. Appl Soft Comput. 2013 Feb;13(2):1247–1264.
  3. Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):671–680.
  4. Storn R, Price K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim. 1997;11(4):341–359.
  5. Alatas B. ACROA: Artificial Chemical Reaction Optimization Algorithm for global optimization. Expert Syst Appl. 2011 Sep;38(10):13170–13180.
  6. Kaveh A. Water evaporation optimization algorithm. In: Advances in metaheuristic algorithms for optimal design of structures. Cham: Springer International Publishing; 2017. p. 489–509.
  7. Shareef H, Ibrahim AA, Mutlag AH. Lightning search algorithm. Appl Soft Comput. 2015 Nov;36:315–333.
  8. Rashedi E, Nezamabadi-pour H, Saryazdi S. GSA: A Gravitational Search Algorithm. Inf Sci. 2009 Jun;179(13):2232–2248.
  9. Hatamlou A. Black hole: a new heuristic optimization approach for data clustering. Inf Sci. 2013 Feb;222:175–184.
  10. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of International Conference on Neural Networks. vol. 4, 1995. p. 1942–1948.
  11. Pacheco da Luz EF, Becceneri JC, de Campos Velho HF. A new multi-particle collision algorithm for optimization in a high performance environment. J Comput Interdiscip Sci. 2008;1(1):3–10.
  12. Yang X-S. Firefly algorithms for multimodal optimization. Lecture Notes in Computer Science, LNCS, vol. 5792, Berlin, Heidelberg: Springer; 2009. p. 169–178.
  13. Askarzadeh A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput Struct. 2016 Jun;169:1–12.
  14. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016 May;95:51–67.
  15. Mucherino A, Seref O, Erhun Kundakcioglu O, Pardalos P. Monkey search: a novel metaheuristic search for global optimization. In: AIP Conference Proceedings, vol. 953, Melville, NY: AIP Publishing; 2007. p. 162–173.
  16. Yang XS. A new metaheuristic Bat-inspired Algorithm. Studies in Computational Intelligence, vol. 284, Berlin, Heidelberg: Springer; 2010. p. 65–74.
  17. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim. 2007 Oct;39(3):459–471.
  18. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014 Mar;69:46–61.
  19. Pan W-T. A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl Based Syst. 2012 Feb;26:69–74.
  20. Mirjalili S. Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl. 2016 May;27(4):1053–1073.
  21. Cuevas E, Cienfuegos M. A new algorithm inspired in the behavior of the social-spider for constrained optimization. Expert Syst Appl. 2014 Feb;41(2):412–425.
  22. Rajabioun R. Cuckoo optimization algorithm. Appl Soft Comput. 2011 Dec;11(8):5508–5518.
  23. Yazdani M, Jolai F. Lion optimization algorithm (LOA): a nature-inspired metaheuristic algorithm. J Comput Des Eng. 2016 Jan;3(1):24–36.
  24. Glover F. Tabu Search—Part I. ORSA J Comput. 1989 Aug;1(3):190–206.
  25. Glover F. Tabu Search—Part II. ORSA J Comput. 1990 Feb;2(1):4–32.
  26. Husseinzadeh Kashan A. League championship algorithm (LCA): an algorithm for global optimization inspired by sport championships. Appl Soft Comput. 2014 Mar;16:171–200.
  27. Moosavian N, Kasaee Roodsari B. Soccer league competition algorithm: a novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol Comput. 2014 Aug;17:14–24.
  28. Tan Y, Zhu Y. Fireworks algorithm for optimization. Lecture Notes in Computer Science, LNCS, vol. 6145, Berlin, Heidelberg: Springer; 2010. p. 355–364.
  29. Kaveh A, Mahdavi VR. Colliding bodies optimization method for optimum design of truss structures with continuous variables. Adv Eng Softw. 2014 Apr;70:1–12.
  30. Kaveh A. Tug of war optimization. Advances in Metaheuristic Algorithms for Optimal Design of Structures, Cham: Springer International Publishing; 2017. p. 451–487.
  31. Kaveh A, Dadras A. A novel meta-heuristic optimization algorithm: thermal exchange optimization. Adv Eng Softw. 2017 Aug;110:69–84.
  32. Javidy B, Hatamlou A, Mirjalili S. Ions motion algorithm for solving optimization problems. Appl Soft Comput. 2015 Jul;32:72–79.
  33. Yapici H, Cetinkaya N. A new meta-heuristic optimizer: pathfinder algorithm. Appl Soft Comput J. 2019 May;78:545–568.
  34. Hayyolalam V, Kazem AAP. Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Eng Appl Artif Intell. 2020 Jan;87:103249.
  35. Ghasemi M, Davoudkhani IF, Akbari E, Rahimnejad A, Ghavidel S, Li L. A novel and effective optimization algorithm for global optimization and its engineering applications: Turbulent Flow of Water-based Optimization (TFWO). Eng Appl Artif Intell. 2020 Jun;92:103666.
  36. Heidari AA, Pahlavani P. An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl Soft Comput. 2017 Nov;60:115–134.
  37. Yılmaz S, Küçüksille Ecir U. A new modification approach on bat algorithm for solving optimization problems. Appl Soft Comput. 2015 Mar;28:259–275.
  38. Beheshti Z, Shamsuddin SM. Non-parametric particle swarm optimization for global optimization. Appl Soft Comput. 2015 Mar;28:345–359.
  39. Wang L, Yang B, Orchard J. Particle swarm optimization using dynamic tournament topology. Appl Soft Comput. 2016 Nov;48:584–596.
  40. Yan B, Zhao Z, Zhou Y, Yuan W, Li J, Wu J, Cheng D. A particle swarm optimization algorithm with random learning mechanism and Levy flight for optimization of atomic clusters. Comput Phys Commun. 2017 Oct;219:79–86.
  41. Yan J, He W, Jiang X, Zhang Z. A novel phase performance evaluation method for particle swarm optimization algorithms using velocity-based state estimation. Appl Soft Comput. 2017 Aug;57:517–525.
  42. Kiran MS. Particle swarm optimization with a new update mechanism. Appl Soft Comput. 2017 Nov;60:670–678.
  43. Chen Y, Li L, Peng H, Xiao J, Yang Y, Shi Y. Particle swarm optimizer with two differential mutation. Appl Soft Comput J. 2017 Dec;61:314–330.
  44. Yelghi A, Köse C. A modified firefly algorithm for global minimum optimization. Appl Soft Comput. 2018 Jan;62:29–44.
  45. Meng T, Pan Q-K. An improved fruit fly optimization algorithm for solving the multidimensional knapsack problem. Appl Soft Comput. 2017 Jan;50:79–93.
  46. Wan Y, Wang M, Ye Z, Lai X. A feature selection method based on modified binary coded ant colony optimization algorithm. Appl Soft Comput. 2016 Dec;49:248–258.
  47. Karaboga D, Akay B. A modified artificial bee colony (ABC) algorithm for constrained optimization problems. Appl Soft Comput. 2011 Apr;11(3):3021–3031.
  48. Nasiri MM. A modified ABC algorithm for the stage shop scheduling problem. Appl Soft Comput J. 2015 Mar;28:81–89.
  49. Ghambari S, Rahati A. An improved artificial bee colony algorithm and its application to reliability optimization problems. Appl Soft Comput. 2017 Oct;62:736–767.
  50. Lynn N, Suganthan PN. Ensemble particle swarm optimizer. Appl Soft Comput. 2017 Jun;55:533–548.
  51. Javidrad F, Nazari M. A new hybrid particle swarm and simulated annealing stochastic optimization method. Appl Soft Comput. 2017 Nov;60:634–654.
  52. Mousa AA, El-Shorbagy MA, Abd-El-Wahed WF. Local search based hybrid particle swarm optimization algorithm for multiobjective optimization. Swarm Evol Comput. 2012 Apr;3:1–14.
  53. Razmara Shooli A, Vosoughi AR, Banan MR. A mixed GA-PSO-based approach for performance-based design optimization of 2D reinforced concrete special moment-resisting frames. Appl Soft Comput J. 2019 Dec;85:105843.
  54. García Nieto PJ, García-Gonzalo E, Alonso Fernández JR, Díaz Muñiz C. A hybrid PSO optimized SVM-based model for predicting a successful growth cycle of the Spirulina platensis from raceway experiments data. J Comput Appl Math. 2016 Jan;291:293–303.
  55. Mahi M, Baykan ÖK, Kodaz H. A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl Soft Comput. 2015 May;30:484–490.
  56. Jadon SS, Tiwari R, Sharma H, Bansal JC. Hybrid Artificial Bee Colony algorithm with Differential Evolution. Appl Soft Comput. 2017 Sep;58:11–24.
  57. Sun G, Zhang A, Yao Y, Wang Z. A novel hybrid algorithm of gravitational search algorithm with genetic algorithm for multi-level thresholding. Appl Soft Comput. 2016 Sep;46:703–730.
  58. Gnandt C, Callies R. CGRS — An advanced hybrid method for global optimization of continuous functions closely coupling extended random search and conjugate gradient method. J Comput Appl Math. 2018 May;333:99–115.
  59. Khojaste Sarakhsi M, Fatemi Ghomi SMT, Karimi B. A new hybrid algorithm of scatter search and Nelder–Mead algorithms to optimize joint economic lot sizing problem. J Comput Appl Math. 2016 Jan;292:387–401.
  60. Noack MM, Funke SW. Hybrid genetic deflated Newton method for global optimisation. J Comput Appl Math. 2017;325:97–112.
  61. Torkaman S, Fatemi Ghomi SMT, Karimi B. Hybrid simulated annealing and genetic approach for solving a multi-stage production planning with sequence-dependent setups in a closed-loop supply chain. Appl Soft Comput. 2017 Oct;71:1085–1104.
  62. Lin Y, Bian Z, Liu X. Developing a dynamic neighborhood structure for an adaptive hybrid simulated annealing – tabu search algorithm to solve the symmetrical traveling salesman problem. Appl Soft Comput. 2016 Dec;49:937–952.
  63. Dey A, Dey S, Bhattacharyya S, Platos J, Snasel V. Novel quantum inspired approaches for automatic clustering of gray level images using Particle Swarm Optimization, Spider Monkey Optimization and Ageist Spider Monkey Optimization algorithms. Appl Soft Comput J. 2020 Mar;88:106040.
  64. Liu M, Yao X, Li Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl Soft Comput J. 2020 Feb;87:105954.
  65. Mosa MA. A novel hybrid particle swarm optimization and gravitational search algorithm for multi-objective optimization of text mining. Appl Soft Comput J. 2020 May;90:106189.
  66. Nelder JA, Mead R. A simplex method for function minimization. Comput J. 1965;7(4):308–313.
  67. 67.
    Fan SKS, Zahara E. A hybrid simplex search and particle swarm optimization for unconstrained optimization. Eur J Oper Res. 2007;181(2):527548.
  68. 68.
    Hsu CC, Gao CH. Particle swarm optimization incorporating simplex search and center particle for global optimization. In: SMCia/08 - Proceedings of the 2008 IEEE Conference on Soft Computing on Industrial Applications. 2008. p. 2631.
  69. 69.
    Zahara E, Hu C-H. Solving constrained optimization problems with hybrid particle swarm optimization. Eng Optim. 2008 Nov;40(11):10311049.
  70. 70.
    Liu A, Yang M-T. A new hybrid nelder-mead particle swarm optimization for coordination optimization of directional overcurrent relays. Math Probl Eng. 2012;2012: 118.
  71. 71.
    Vakil Baghmisheh MT, Peimani M, Sadeghi MH, Ettefagh MM, Tabrizi AF. A hybrid particle swarm-Nelder–Mead optimization method for crack detection in cantilever beams. Appl. Soft Comput.. 2012;12: 22172226.
  72. 72.
    Mesbahi T, Khenfri F, Rizoug N, Chaaban K, Bartholomeüs P, Le Moigne P. Dynamical modeling of Li-ion batteries for electric vehicle applications based on hybrid Particle Swarm-Nelder–Mead (PSO-NM) optimization algorithm. Electr Power Syst Res. 2016;131: 195204.
  73. 73.
    Wang L, Cao Q, Zhang Z, Mirjalili S, Zhao W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell. 2022;114: 105082.
  74. 74.
    Khalid OW, Mat Isa NA, Mat Sakim HA. Emperor penguin optimizer: A comprehensive review based on state-of-the-art meta-heuristic algorithms. Alex Eng J. 2022;63: 487526.
  75. 75.
    Rahman I, Mohamad-Saleh J. Hybrid bio-inspired computational intelligence techniques for solving power system optimization problems: a comprehensive survey. Appl Soft Comput. 2018 Aug;69: 72130.
  76. 76.
    Devi SA, Vijayalakshmi C. Bio inspired optimization algorithms in disaster. Procedia Comput Sci. 2020;172: 176180.
  77. 77.
    Vinh Luu T, Nguyen NS. Parameters extraction of solar cells using modified JAYA algorithm. Optik. 2020 Feb;203: 164034.
  78. 78.
    Azad AS, Rahaman MdSA, Watada J, Vasant P, Gamez Vintaned JA. Optimization of the hydropower energy generation using meta-heuristic approaches: a review. Energy Rep. 2020;6: 22302248.
  79. 79.
    Ahmadianfar I, Khajeh Z, Asghari-Pari S-A, Chu X. Developing optimal policies for reservoir systems using a multi-strategy optimization algorithm. Appl Soft Comput. 2019 Jul;80: 888903.
  80. 80.
    Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst. 2016 Mar;96: 120133.
  81. 81.
    Lu D, Ma Y, Kong F, Guo C, Miao J, Du X. Support vector regression with heuristic optimization algorithms for predicting the ground surface displacement induced by epb shield tunneling. Gondwana Res. 2022; doi:10.1016/j.gr.2022.07.002.
  82. 82.
    Krishna Reddy AKV, Venkata Lakshmi Narayana K. Meta-heuristics optimization in electric vehicles-an extensive review. Renew Sustain Energy Rev. 2022;160: 112285.
  83. 83.
    Machado-Coelho TM, Machado AMC, Jaulin L, Ekel P, Pedrycz W, Soares GL. An interval space reducing method for constrained problems with particle swarm optimization. Appl Soft Comput. 2017 Oct;59: 405417.
  84. 84.
    Eberhart R, Shi Y. Tracking and optimizing dynamic systems with particle swarms. In: Proceedings of the 2001 Congress on Evolutionary Computation. 2001. p. 94100.
  85. 85.
    Spendley W, Hext GR, Himsworth FR. Sequential application of simplex designs in optimisation and evolutionary operation. Technometrics. 1962;4(4):441461.


© The Author(s) 2022. Licensee IntechOpen. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

