
Particle Swarm Optimization (PSO) is a population-based optimization algorithm developed by observing the social behavior of bird flocks and fish schools 🐦🐟 [1]. It is one of the most frequently used optimization algorithms in the literature. In this article, I will explain the use and performance of the PSO (Particle Swarm Optimization) algorithm for solving the OneMax problem. But first, let's consolidate the information I have gathered from different sources so that we can grasp the logic of the algorithm.

The PSO algorithm, one of the most widely used metaheuristic optimization methods, is based on the idea of information sharing: members of a swarm adapt to their environment, search for rich food sources, and escape from predators together 🌽 🥗 [2].

Particle Swarm Optimization was inspired by the observation that when animals moving in swarms meet basic needs such as finding food, the movement of each individual affects the others, so the swarm reaches its goal more easily. The algorithm was developed by Kennedy and Eberhart in 1995 [4].

As is often seen in other optimization algorithms, particles in PSO tend to follow the best individual. For this reason, after a while all the particles cluster in one region while searching for the solution [4]. When you run the PSO code, you will observe after a while that the particles really do cluster somewhere 👀

The particles try to find the best position by moving to new positions according to the velocity-update formula. Because the algorithm works with swarm logic, the particle in the best position is followed by the others, and this is why the clustering is observed.
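The velocity and position updates referred to above are the standard PSO equations (a sketch of the usual form; r1 and r2 are uniform random numbers in [0, 1], pbest_i is particle i's personal best position and gbest is the swarm's global best):

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left( pbest_i - x_i^{t} \right) + c_2 r_2 \left( gbest - x_i^{t} \right)
```

```latex
x_i^{t+1} = x_i^{t} + v_i^{t+1}
```

Here w is the inertia weight, and c1 and c2 are the cognitive and social coefficients whose values are compared later in this article.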

Calculation Phase for the OneMax Problem

The OneMax problem is a simple optimization problem. The goal of the problem is to maximize the number of ones in a candidate solution. The formula for the problem is given below; the D in the formula denotes the size of the problem 🔖
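The standard OneMax objective can be written as follows, where the solution vector x consists of D bits:

```latex
\max f(x) = \sum_{i=1}^{D} x_i, \qquad x_i \in \{0, 1\}
```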

For the maximization problem above, the transfer function used in the PSO algorithm is the sigmoid. The sigmoid maps the values it receives into the interval (0, 1), with 0.5 used as the threshold: the aim is for all D dimensions of a particle to produce a sigmoid output above 0.5, so that each such output is rounded to 1. When other studies on this algorithm in the literature are examined, different w, c1 and c2 values are used in the velocity-update formula of the particles 💡
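As a minimal sketch of how this could look (the binarization scheme, function names and default parameters here are my own assumptions, not the article's actual code; the defaults mirror the second parameter setting discussed below):

```python
import numpy as np

def sigmoid(v):
    """Map real values into (0, 1) so they can be thresholded at 0.5."""
    return 1.0 / (1.0 + np.exp(-v))

def pso_onemax(D=100, n_particles=40, iters=1000, w=0.4, c1=1.5, c2=2.0, seed=0):
    """Binary PSO sketch for OneMax: positions are real-valued and
    decoded to bits via sigmoid(x) > 0.5; fitness is the count of ones."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, D))  # continuous positions
    v = np.zeros((n_particles, D))            # velocities

    def fitness(pos):
        bits = (sigmoid(pos) > 0.5).astype(int)
        return bits.sum(axis=1)

    pbest = x.copy()
    pbest_fit = fitness(pbest)
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, D))
        # standard velocity and position updates
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fit = fitness(x)
        improved = fit > pbest_fit
        pbest[improved] = x[improved]
        pbest_fit[improved] = fit[improved]
        if fit.max() > gbest_fit:
            gbest_fit = fit.max()
            gbest = x[fit.argmax()].copy()
    return gbest_fit
```

For OneMax the optimum is D itself, so as the loop runs the returned global-best fitness should approach D.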

Statistical Results with Wilcoxon

First, the values w = 0.005, c1 = 0.005 and c2 = 0.009 were used, with a population of 40 particles. With these values, the algorithm was run 30 times for each of the dimensions D = 100, 500 and 1000. The value D indicates the problem size. The tables below show the result values, evaluated with the Wilcoxon statistic 📝

D = 100, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.0031551897084034007 |
| Median | 0.0025380204697822194 |
| Best Value | 0.00023797210224091314 |
| Worst Value | 0.008994153476226206 |
| Standard Deviation | 0.0020874904616420636 |

D = 500, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.0024457287103268333 |
| Median | 0.002274384447498488 |
| Best Value | 0.0001588232061854383 |
| Worst Value | 0.0001588232061854383 |
| Standard Deviation | 0.0016983186907986983 |

D = 1000, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.0014781328189136006 |
| Median | 0.0010733144689043976 |
| Best Value | 7.397751636520865e-05 |
| Worst Value | 0.004376988570160589 |
| Standard Deviation | 0.0010840516145429367 |

Then the values w = 0.4, c1 = 1.5 and c2 = 2 were used, again with a population of 40 particles. With these values, the algorithm was run 30 times for each of the dimensions D = 100, 500 and 1000. The value D indicates the problem size. The tables below show the result values, evaluated with the Wilcoxon statistic 📝

D = 100, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.10290449068126414 |
| Median | 0.06354761821949564 |
| Best Value | 0.001496700649763677 |
| Worst Value | 0.4405023200617695 |
| Standard Deviation | 0.11308392205322863 |

D = 500, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.11090339649603935 |
| Median | 0.07350043038100676 |
| Best Value | 0.0010117064651082308 |
| Worst Value | 0.5125380033802425 |
| Standard Deviation | 0.11343766955639699 |

D = 1000, number of iterations: 100,000, run 30 times in total

| Statistic | Value |
| --- | --- |
| Mean | 0.1276016324307818 |
| Median | 0.048701691139323475 |
| Best Value | 0.0005810872467598704 |
| Worst Value | 0.7015965709030323 |
| Standard Deviation | 0.15191292927119768 |

🕵 The mean, median and standard deviation increase when the w, c1 and c2 values are increased relative to the first setting. A large standard deviation also means that the particles in the population deviate more from the mean. With the first, smaller set of values, the minimum is reached more easily, which gives the best result.

🏋 It was observed that with the first set of values, w = 0.005, c1 = 0.005 and c2 = 0.009, the desired result is reached more easily. As the size D was increased to 100, 500 and 1000, the value of the best position was seen to get smaller and smaller, which is the desired behavior.

NOTE 🌠

The best_globalbest value of each run of the PSO algorithm is stored in a NumPy array, and the values above are then obtained by applying the Wilcoxon statistic.
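A minimal sketch of that step (the array names and numbers below are synthetic placeholders, not the actual run results): collect the 30 global-best values per configuration in NumPy arrays, compute the descriptive statistics reported in the tables, and compare the two parameter settings with SciPy's Wilcoxon signed-rank test:

```python
import numpy as np
from scipy import stats

# Hypothetical data: one global-best value per run, 30 runs per configuration
rng = np.random.default_rng(42)
best_small = rng.normal(0.003, 0.002, 30).clip(min=1e-6)  # w=0.005, c1=0.005, c2=0.009
best_large = rng.normal(0.10, 0.11, 30).clip(min=1e-6)    # w=0.4,   c1=1.5,   c2=2

# Descriptive statistics of the kind reported in the tables above
print("mean:", best_small.mean(), "median:", np.median(best_small),
      "best:", best_small.min(), "worst:", best_small.max(),
      "std:", best_small.std(ddof=1))

# Paired Wilcoxon signed-rank test between the two configurations
stat, p = stats.wilcoxon(best_small, best_large)
print("Wilcoxon statistic:", stat, "p-value:", p)
```

A small p-value would indicate that the difference between the two parameter settings is statistically significant.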

📌 If you want to review the code, I will publish it as a GitHub repo. I wish you good work 🐞

REFERENCES

[1] M. Yasin Ozsaglam, Mehmet Cunkas, "Particle Swarm Optimization Algorithm for Solving Optimization Problems," Journal of Polytechnic, Vol. 11, No. 4, pp. 299-305, 2008.
