# Performance of Particle Swarm Optimization in One Max Problem

Particle Swarm Optimization (PSO) is a population-based optimization algorithm developed by observing the social behavior of bird and fish swarms 🐦🐟 [1]. It is one of the most frequently used algorithms in the literature. In this article, I will explain the use and performance of the PSO algorithm in solving the One Max problem. But first, let's consolidate the information I have collected from different sources so that we can grasp the logic of the algorithm.

The PSO algorithm, one of the most widely used metaheuristic optimization methods, is based on the way a swarm shares information: the ability to adapt to the environment, to search for rich food sources, and to escape from predators 🌽 🥗 [2].

Particle Swarm Optimization was inspired by the observation that when animals moving in swarms meet basic needs, such as finding food, the movement of each individual affects the others, so the swarm reaches its goal more easily. It is an optimization algorithm developed by Kennedy and Eberhart in 1995 [4].

As is often seen in other optimization algorithms, particles in PSO tend to follow the best particle. For this reason, after a while all the particles cluster in one place while searching for the solution [4]. When you run the PSO code, you will observe that the particles do in fact cluster somewhere after a while 👀

The particles try to find the best position by moving to new positions according to the velocity update formula. Because the algorithm works with swarm logic, each particle follows the one in the best position, and this is why a cluster forms.
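The velocity update just described can be sketched as follows. This is a minimal illustration of the standard PSO velocity formula, v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x); the sample positions and velocities here are made up for demonstration, and the coefficients are the first set used later in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coefficients from the article's first experiment (w=0.005, c1=0.005, c2=0.009)
w, c1, c2 = 0.005, 0.005, 0.009

def update_velocity(v, x, pbest, gbest):
    """Standard PSO velocity update:
    v_new = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)"""
    r1 = rng.random(x.shape)  # random factors drawn fresh each step
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Hypothetical 5-dimensional particle state
x = np.zeros(5)           # current position
v = np.ones(5)            # current velocity
pbest = np.ones(5)        # this particle's personal best position
gbest = np.full(5, 2.0)   # the swarm's global best position

v_new = update_velocity(v, x, pbest, gbest)
x_new = x + v_new         # the position update simply adds the new velocity
```

Note how both the personal best and the global best pull the particle; the global-best term is what eventually draws the whole swarm into one cluster.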

##### Calculation Phase In The Use Of One Max Problem

The One Max problem is a simple optimization problem: the goal is to maximize the number of ones in a candidate solution. The formula of the problem is given below, where D denotes the size (dimension) of the problem 🔖
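The formula referred to above is the standard One Max objective, written here for completeness:

```latex
\max f(\mathbf{x}) = \sum_{i=1}^{D} x_i, \qquad x_i \in \{0, 1\}
```

Each solution is a bit string of length D, and the fitness is simply the count of ones, so the optimum is f = D.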

For the maximization problem above, the cost function used in the PSO algorithm is based on the sigmoid. The sigmoid function maps each component of a particle's position to a value between 0 and 1 and thresholds it at 0.5: outputs above 0.5 become 1, the rest become 0. The goal is therefore to drive the sigmoid output of every dimension above 0.5. When other studies of this algorithm in the literature are examined, the w, c1 and c2 values in the velocity update formula are chosen differently 💡
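The sigmoid thresholding step can be sketched like this; the sample position vector is made up for illustration, but the 0.5 threshold matches the rule described above.

```python
import numpy as np

def sigmoid(x):
    """Map any real value into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize(position):
    """Turn a continuous particle position into a bit string:
    sigmoid outputs above 0.5 become 1, the rest become 0."""
    return (sigmoid(position) > 0.5).astype(int)

# Hypothetical continuous position of one particle (D = 5)
pos = np.array([-2.0, -0.1, 0.0, 0.3, 4.0])
bits = binarize(pos)   # → array([0, 0, 0, 1, 1])
fitness = bits.sum()   # One Max fitness: the count of ones → 2
```

Since sigmoid(x) > 0.5 exactly when x > 0, this is equivalent to thresholding the raw position at zero; the sigmoid form is the one the article describes.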

##### Statistical Results with Wilcoxon

First, the values w = 0.005, c1 = 0.005 and c2 = 0.009 were used, with a population of 40 particles. With these values, the algorithm was run 30 times for each of the dimensions D = 100, 500 and 1000 (D denotes the problem size). The following table shows the result values computed with the Wilcoxon statistic 📝

##### D size of problem: 1000, number of iterations: 100,000, total of 30 runs

Then the values w = 0.4, c1 = 1.5 and c2 = 2 were used, again with a population of 40 particles. With these values, the algorithm was run 30 times for each of the dimensions D = 100, 500 and 1000. The following table shows the result values computed with the Wilcoxon statistic 📝

##### D size of problem: 1000, number of iterations: 100,000, total of 30 runs

🕵 The mean, median and standard deviation all increase when the w, c1 and c2 values are increased relative to the first setting. A large standard deviation means that the values in the series, i.e. the population, deviate from the mean. With the first, smaller values, the best result, reaching the minimum cost value, is achieved more easily.

🏋 It was observed that with the first setting (w = 0.005, c1 = 0.005 and c2 = 0.009), the desired result is reached more easily. As the dimension D was increased through 100, 500 and 1000, the value of the best position was seen to get smaller and smaller, which is the desired behavior.

NOTE 🌠

The best_globalbest value of the PSO algorithm is stored in a NumPy array, and the reported values are then obtained by applying the Wilcoxon statistic.
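A minimal sketch of this post-processing step, assuming SciPy is available: the two arrays below are hypothetical stand-ins for the 30 best_globalbest values collected under each parameter setting, not the article's actual results.

```python
import numpy as np
from scipy.stats import wilcoxon  # paired signed-rank test

# Hypothetical stand-ins for the best_globalbest arrays of the two
# experiments (30 independent runs per parameter setting).
rng = np.random.default_rng(0)
runs_a = rng.normal(loc=5.0, scale=0.5, size=30)  # w=0.005, c1=0.005, c2=0.009
runs_b = rng.normal(loc=6.0, scale=1.5, size=30)  # w=0.4,   c1=1.5,   c2=2.0

# The summary statistics reported in the tables
for name, runs in (("setting A", runs_a), ("setting B", runs_b)):
    print(name, runs.mean(), np.median(runs), runs.std())

# Wilcoxon signed-rank test on the paired per-run results
stat, p_value = wilcoxon(runs_a, runs_b)
```

A small p_value would indicate that the difference between the two parameter settings is statistically significant across the 30 runs.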

📌 If you want to review the code, I will publish it as a GitHub repo. I wish you good work 🐞

REFERENCES

[1] M. Yasin Ozsaglam, Mehmet Cunkas, "Particle swarm optimization algorithm for solving optimization problems", Journal of Polytechnic, Vol. 11, No. 4, pp. 299-305, 2008.

[2] Fırat University, BMÜ-579 Metaheuristic Methods course notes, particle swarm optimization.

[3] Pablo J. Villacorta, "Swarm Intelligence Metaheuristics, Part 2: Particle Swarm Optimization", https://www.stratio.com/blog/swarm-intelligence-metaheuristics-part-2-particle-swarm-optimization/.

[4] Muhammad Pektas, "Particle Swarm Optimization (PSO)", Medium article, April 2019.
