ACTIVE POWER LOSS REDUCTION BY ASSORTED ALGORITHMS

This paper presents assorted algorithms for solving the optimal reactive power problem: Symbiosis Modeling (SM), which extends the dynamics of the canonical PSO algorithm by adding a significant ingredient that accounts for the symbiotic co-evolution between species; Hybridization of an Evolutionary algorithm with a Conventional Algorithm (HCA), which combines the abilities of evolutionary and conventional algorithms; and Genetical Swarm Optimization (GS), which combines Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). All three algorithms (SM, HCA, GS) are used to improve the convergence rate while maintaining a good balance between exploration and exploitation. All three are applied to the reactive power optimization problem and evaluated on the standard IEEE 30-bus system. The results show that all three algorithms solve the reactive power problem with a rapid convergence rate, and that SM has a slight edge over HCA and GS in reducing the real power loss.


Introduction
The reactive power optimization problem has a significant influence on the secure and economic operation of power systems. This paper presents assorted algorithms for solving it: Symbiosis Modeling (SM), which extends the dynamics of the canonical PSO algorithm by adding a significant ingredient that accounts for the symbiotic co-evolution between species; Hybridization of an Evolutionary algorithm with a Conventional Algorithm (HCA), which uses the abilities of evolutionary and conventional algorithms; and Genetical Swarm Optimization (GS), which combines Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). In SM, the dynamics of each species in the ecosystem is obtained by extending the dynamics of the canonical PSO model: we extend the control law of the canonical PSO model by adding a significant ingredient that takes into account the symbiotic co-evolution between species. Since mass extinction can be a natural feature of an ecosystem's dynamics, and is sometimes the result of the co-evolution of species, species extinction and speciation are also simulated in our model. This model is instantiated as a novel multi-species optimizer, since the proposed algorithm contains two hierarchies and is based on the PSO model. The descriptions of the biological details of symbiosis theory and mass extinction theory in this paper are taken from [1][2][3][4][5][6]. The slave swarms execute the canonical PSO algorithm or its variants independently to maintain the diversity of particles, while the particles in the master swarm improve themselves based on their own knowledge and also on the knowledge of the particles in the slave swarms. The HCA method is based on a new combination of conventional and evolutionary algorithms for sweeping the search space.
In earlier work, the evolutionary algorithm was used to scan the search space first, and a conventional algorithm was then employed with the obtained swarm as its initial guess. This approach uses the abilities of the two algorithms separately. The new hybrid technique proposed here, called Genetical Swarm Optimization (GS), consists of a strong cooperation of GA and PSO, since it maintains the integration of the two techniques for the entire run of the simulation. In each iteration, some of the individuals are replaced by new ones generated by GA, while the remaining part is carried over from the previous generation but moved through the solution space by PSO. In this way, premature convergence of the best individuals of the population to a local optimum, one of the best-known drawbacks of hybrid global-local strategies, is avoided. All three algorithms (SM, HCA, GS) are applied to the reactive power optimization problem and evaluated on the standard IEEE 30-bus system. The results show that all three solve the reactive power problem with a rapid convergence rate, and that SM has a slight edge over HCA and GS in reducing the real power loss.

Problem Formulation
The objective of the reactive power optimization problem is to minimize the active power loss in the transmission network as well as to improve the voltage profile of the system. Reactive power scheduling is performed by adjusting reactive power controllers such as generator bus voltages, reactive power of VAR sources, and transformer taps. The objective is the total real power loss over all branches:

min F = P_L = Σ_{k ∈ N_br} g_k [V_i² + V_j² − 2 V_i V_j cos(θ_i − θ_j)]   (1)

where g_k is the conductance of branch k connecting buses i and j, V_i and θ_i are the voltage magnitude and angle at bus i, and N_br is the set of branches.

Subject to
i) the control vector constraints, (2)
ii) the dependent vector constraints, (3)
iii) the power flow constraint F(X, Y) = 0, (4)
where λ_i, i = 1, 2, ..., N, are the penalty factors and P_L^n is the total real power loss of the n-th particle.
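As an illustration of the penalty-function treatment of the dependent-variable limits, the following is a minimal Python sketch; the function name, voltage limits and penalty factor are hypothetical choices, and in a real reactive power dispatch the bus voltages would come from a load-flow solution.

```python
def penalized_fitness(p_loss, voltages, v_min=0.95, v_max=1.1, penalty=1000.0):
    """Total real power loss plus quadratic penalties on violated voltage limits.

    p_loss   : real power loss of the particle (per unit)
    voltages : dependent bus voltage magnitudes (per unit)
    """
    violation = 0.0
    for v in voltages:
        if v < v_min:
            violation += (v_min - v) ** 2   # lower-limit violation
        elif v > v_max:
            violation += (v - v_max) ** 2   # upper-limit violation
    return p_loss + penalty * violation

# Example: a particle with 0.05 p.u. loss and one bus voltage below the limit.
f = penalized_fitness(0.05, [1.0, 0.93, 1.02])
```

Infeasible particles thus receive a fitness worse than any feasible one with comparable loss, which steers the swarm back inside the limits.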

Symbiosis Modeling (SM)
Symbiosis, first defined by Anton de Bary in 1879, is simply the living together of organisms from different species. Here, we use symbiosis to denote relationships between dissimilar species that are constant and intimate. Symbiosis includes three classic types: mutualism, commensalism and parasitism. In detail, mutualism describes a relationship in which all organisms involved derive benefits; commensalism is a relationship in which one species benefits without apparent benefit or cost to the other members of the association; and parasitism is a relationship in which one organism benefits at a cost to the other members. In a sense, what is interesting about symbiosis is not so much that organisms live together, but that they cooperate. Over the years, symbiosis has come to refer to the special case of mutualism. Ultimately, what is of interest is not mutualism itself, but the cooperation that enables dissimilar species to associate intimately with each other over evolutionarily significant durations. Symbiosis is almost ubiquitous in nature: there are practically no plants or animals free of symbionts (organisms in a symbiotic relationship) living on or in them. Research shows different types of symbiotic interactions. Some involve internal interactions, like bacteria in human intestines. Some of these interactions appear to have led to the evolution of organisms (e.g., the eukaryotic cells from which all plants and animals are descended have a symbiotic origin). Others appear to be purely behavioral, as in human-honeyguide mutualism. Currently, enlightened evolutionary theory recognizes symbiosis as an integral process and a fundamental source of innovation in evolution.

Mass Extinction
Species extinction has played a significant role in the history of life on Earth. Of the estimated one to four billion species that have existed on Earth, fewer than 50 million are alive today; all the others became extinct. There are two primary schools of thought about the causes of extinction. The traditional view, still held by most paleontologists as well as many in other disciplines, is that extinction is the result of external stresses imposed on the ecosystem by the environment. At the other end of the scale, an increasing number of biologists and ecologists support the idea that extinction has biotic causes: that extinction is a natural part of the dynamics of ecosystems and would take place regardless of any stresses arising from the environment. There is evidence in favour of this viewpoint that extinction can be the result of the interactions and dependencies between species. For example, in the context of symbiotic mutualism or food-web interactions, the extinction of a specific species can cause the extinction of others. There are two best-known models of species extinction [10]: the Bak-Sneppen model, which attempts to explain mass extinction as a result of species interactions, and the extinction model of Newman, which models extinction as the result of environmental influences on species.

Canonical PSO Model
One of the best-developed SI systems is Particle Swarm Optimization (PSO), a kind of swarm intelligence inspired by the emergent collective intelligence of social models (e.g., bird flocks and fish shoals). The canonical PSO (CPSO) model evolves a single population of interacting particles moving around the D-dimensional problem space in search of the optimum. Each particle is represented as X_i = (x_i1, x_i2, ..., x_iD) and records its previous best position P_i = (P_i1, P_i2, ..., P_iD), also called pbest. The index of the best particle among all particles in the population is denoted g, and P_g is called gbest. In the canonical PSO model, each particle is accelerated by two elastic forces: one attracts it towards pbest, the other towards gbest. The magnitude of each force is chosen randomly at each time step. At each generation, the equation controlling the particles is of the form:

V_i(t+1) = χ [V_i(t) + c1 r1 (P_i − X_i(t)) + c2 r2 (P_g − X_i(t))]

where c1 and c2 are two learning rates that control, respectively, the proportion of individual learning and social transmission in the swarm, and r1, r2 are two random vectors uniformly distributed in [0, 1].
The position of particle i is then updated every generation using:

X_i(t+1) = X_i(t) + V_i(t+1)

where χ, appearing in the velocity update, is known as the constriction coefficient [7][8][9][10][11][12][13][14][15]. However, studies have shown that the canonical PSO (or original PSO algorithm) has difficulty controlling the balance between exploration (global investigation of the search space) and exploitation (the fine search around a local optimum). The CPSO algorithm performs well in the early iterations (i.e., it converges quickly towards an optimum in the first period of iterations), but has trouble reaching a near-optimal solution in some function optimization problems.
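As a concrete illustration, the following is a minimal Python sketch of the canonical PSO loop with the constriction coefficient, minimizing the sphere function. The parameter values (χ = 0.729, c1 = c2 = 2.05) are the commonly used Clerc constriction settings, not values taken from this paper.

```python
import random

def sphere(x):
    """Test function: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def cpso(fn, dim=2, n_particles=20, iters=200, chi=0.729, c1=2.05, c2=2.05):
    """Minimal canonical PSO with the constriction coefficient chi."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                    # personal best positions
    pval = [fn(x) for x in xs]                    # personal best fitnesses
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fn(xs[i])
            if f < pval[i]:                        # update pbest / gbest
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:
                    gbest, gval = xs[i][:], f
    return gbest, gval

best, val = cpso(sphere)
```

On the reactive power problem, the particle position would encode the control vector (generator voltages, tap settings, VAR injections) and `fn` the penalized loss.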

Symbiosis Co Evolution Model for Optimization
In this section, we describe our model for the co-evolution of symbiotic species and formulate it as an optimization algorithm. We present the outline of our model by making the following assumptions:
1) Within a species, members cooperate with each other and rely on the presence of other species members for survival.
2) Between species, symbiotic partners from distinct species cooperate with each other, and all partners gain an advantage that increases their survival ability.
3) Cooperation both within and between species is obligate throughout the whole life cycles of all species.
4) All species feel the same external environmental stress. Some species will become extinct if the stress is severe enough.
5) If one or more species go extinct, they are replaced with an equal number of new ones.
Thus the number of species remains constant. These assumptions yield a model that can be instantiated as an optimization algorithm, namely PS²O, presented below. The basic goal is to find the minimum of f(X), X ∈ R^D. We create an ecosystem containing a species set S = {S_1, S_2, ..., S_n}, where each species possesses a member set {I_k1, I_k2, ..., I_km}; i.e., a total of n × m individuals co-evolve in the ecosystem. The i-th member of the k-th species is characterized by its position X_ki = (x_ki1, x_ki2, ..., x_kiD). Suppose f(X_ki) is the fitness of I_ki, where a lower fitness value represents a higher ability of survival. Under the presumed external environmental stress, all individuals in our model co-evolve to states of lower and lower fitness by cooperating with each other both within and between species. In each generation t, each individual behaves as follows:
(a) Social evolution: this process addresses the cooperation between individuals of the same species. Owing to the sociobiological background of the canonical PSO model, individuals evolve according to the rules of the canonical PSO algorithm in this process. Within the k-th species, one or more members in the neighbourhood of I_ki contribute their knowledge to I_ki, which also shares its knowledge with its neighbours. I_ki then accelerates towards its personal best position and the best position found by its neighbours:

A_ki = c1 r1 (P_ki − X_ki) + c2 r2 (P_k − X_ki)

where k is the index of the species to which I_ki belongs, A_ki is the acceleration vector of I_ki, P_ki is the personal best position found so far by I_ki, P_k is the best position found so far by its neighbours within species k, c1 is the individual learning rate, c2 is the social learning rate, and r1, r2 ∈ R^D are two random vectors uniformly distributed in [0, 1].
(b) Symbiotic evolution: this process addresses the cooperation between individuals of distinct species. I_ki beneficially interacts with and rewards all its symbiotic partners (individuals of dissimilar species); i.e., each symbiotic partner donates its knowledge to aid the other partners.
The individual then accelerates towards its symbiotic partner of best fitness:

A_ki = A_ki + c3 r3 (P_l − X_ki)

where l is the index of the species to which the best symbiotic partner belongs, c3 is the "symbiotic learning rate", r3 ∈ R^D is a uniform random vector in the range [0, 1], and P_l is the best position found so far by the symbiotic partners of the individual.
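The combined social-plus-symbiotic update can be sketched as a single velocity rule; the function name, coefficient values and the single-gbest simplification (the full slave/master swarm bookkeeping is omitted) are illustrative assumptions, not the paper's exact implementation.

```python
import random

def ps2o_velocity(v, x, pbest, sbest, gbest, chi=0.729, c1=1.3, c2=1.3, c3=1.3):
    """PS2O-style velocity update: canonical PSO terms plus a symbiotic term.

    pbest : member's own best position (individual learning)
    sbest : best position within the member's species (social learning)
    gbest : best position found by symbiotic partners in other species
    """
    new_v = []
    for d in range(len(x)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        new_v.append(chi * (v[d]
                            + c1 * r1 * (pbest[d] - x[d])    # individual term
                            + c2 * r2 * (sbest[d] - x[d])    # within-species term
                            + c3 * r3 * (gbest[d] - x[d])))  # symbiotic term
    return new_v
```

When a member already sits at all three attractors, the update leaves it at rest, which is the expected fixed-point behaviour of PSO-style rules.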

Basic PSO Model
Particle swarm optimization (PSO) is originally attributed to Kennedy and Eberhart [16] and is based on the social behaviour of collections of animals such as flocking birds and schooling fish. In the PSO algorithm each individual of the swarm, called a particle, remembers the best solution found by itself and by the whole swarm along the search trajectory. The particles move through the search space and exchange information with other particles according to the following equations:

v_id = w v_id + c1 r1 (P_id − x_id) + c2 r2 (P_gd − x_id)
x_id = x_id + v_id
where x_id represents the current position of the particle, P_id is the best remembered individual particle position, P_gd denotes the best remembered swarm position, c1 and c2 are the cognitive and social parameters, r1 and r2 are random numbers between 0 and 1, and w is the inertia weight, which is used to balance the global and local search abilities. A large inertia weight facilitates global search while a small inertia weight facilitates local search. A decreasing function for the dynamic inertia weight can be devised as follows:

w = w_final + (w_initial − w_final) × (iter_max − iter) / iter_max

where w_initial and w_final represent the inertia weights at the start and end of a given run respectively, iter_max is the maximum number of iterations in the run, and iter is the current iteration number at the present time step. However, the global search ability at the end of the run may be inadequate due to the use of a linearly decreasing inertia weight, and PSO may fail to find the required optimum when the problem is too complicated. To some extent this can be overcome by employing a self-adapting strategy for adjusting the acceleration coefficients. Suganthan [17] applied an optimizing method that makes the two factors decrease linearly as the iteration number increases, but the results were not as good as with the fixed value 2 for c1 and c2. Ratnaweera [18] improved the convergence of particles to the global optimum by making c1 decrease and c2 increase linearly as the iteration number increases. The coefficients c1 and c2 are given by the following equations:

c1 = (c1f − c1i) × iter / iter_max + c1i
c2 = (c2f − c2i) × iter / iter_max + c2i

where c1i, c1f, c2i and c2f are the initial and final values of the acceleration coefficients.
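The inertia-weight and acceleration-coefficient schedules described above can be sketched directly. The boundary values used below (0.9/0.4 for w, 2.5/0.5 for the coefficients) are common choices in the PSO literature, not values taken from this paper.

```python
def inertia(it, it_max, w_initial=0.9, w_final=0.4):
    """Linearly decreasing inertia weight over a run of it_max iterations."""
    return w_final + (w_initial - w_final) * (it_max - it) / it_max

def acceleration(it, it_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying coefficients in the Ratnaweera style:
    c1 (cognitive) decreases and c2 (social) increases with the iteration count."""
    c1 = (c1_f - c1_i) * it / it_max + c1_i
    c2 = (c2_f - c2_i) * it / it_max + c2_i
    return c1, c2
```

Early in the run the large w and c1 favour exploration; late in the run the small w and large c2 pull the swarm together around the best-found region.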

Steepest Descent Algorithm
The method of steepest descent (SD) is the simplest of the gradient methods. Consider a function F(x) that is defined and differentiable within a given boundary; the direction in which it decreases fastest is the negative gradient of F(x). To find a local minimum of F(x), the SD method follows a zig-zag-like path from an arbitrary starting point, gradually sliding down the gradient until it converges to the minimum. Although this optimization method is less time-consuming than population-based search algorithms, it is highly dependent on the initial estimate of the solution. The method can be described by the following formula:

x_{k+1} = x_k − α_k g(x_k)

where x_{k+1} is the new solution obtained from x_k, g(x_k) is the gradient of F(x) at x_k, and α_k is a scalar step size that should decrease at each iteration. Near the optimum point, a smaller α_k produces smaller movements in x_k and hence a more precise search.
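The SD iteration above can be sketched in a few lines; the geometric step-size schedule, parameter values and test gradient are illustrative assumptions.

```python
def steepest_descent(grad, x0, alpha0=0.25, decay=0.99, iters=500):
    """Steepest descent: x_{k+1} = x_k - alpha_k * g(x_k),
    with a step size alpha_k that shrinks at each iteration."""
    x = list(x0)
    alpha = alpha0
    for _ in range(iters):
        g = grad(x)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
        alpha *= decay                  # decreasing step size
    return x

# Example: F(x) = x1^2 + x2^2 has gradient (2*x1, 2*x2) and minimum at the origin.
x_min = steepest_descent(lambda x: [2 * xi for xi in x], [3.0, -2.0])
```

The sensitivity to the starting point mentioned above shows up whenever F has several local minima: the iterates simply slide into whichever basin x0 lies in.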

The New Hybrid Evolutionary-Conventional Algorithm (HCA)
This method is based on a new combination of conventional and evolutionary algorithms for sweeping the search space. In earlier work, the evolutionary algorithm was used to scan the search space first, and a conventional algorithm was then employed with the obtained swarm as its initial guess. That approach uses the abilities of the two algorithms separately, so a better result cannot be guaranteed, because the proper iteration at which to switch algorithms cannot be determined; moreover, when the two algorithms are used separately, their abilities cannot be exploited at the same time. In the proposed method, in each iteration, after applying the evolutionary algorithm, the swarm is passed to the conventional algorithm for a deterministic number of steps. Thus in each iteration the main abilities, namely random search and a more precise search near the optimum point, are both obtained. When the number of iterations increases and the proposed algorithm approaches a global extremum, the SD algorithm alone can be used for a more precise search near the global extremum, because near the global extremum there is no need for random search. The algorithm proceeds as follows:
Step 1. Specify the number of times the steepest descent algorithm should be applied in each iteration of the PSO algorithm. This number can be a non-integer; a fractional value means the PSO algorithm is applied a specified number of times per iteration of the steepest descent algorithm.
Step 2. Determine the PSO parameters, such as the number of particles, number of iterations, cognitive and social parameters, and so on.
Step 3. Initialize the positions and velocities of the particles randomly in the search space.
Step 4. Evaluate each particle's fitness value. Calculate P_id and P_gd based on their definitions.
Step 5. Based on the number specified in Step 1, move the particles through the search space by the PSO and SD algorithms in each iteration. The positions and velocities of all particles are updated according to equations (18), (19), (22) and (23), and then a group of new particles is generated.
Step 6. If the maximum number of generations has been reached, go to Step 8; else go to Step 7.
Step 7. Calculate the new values of w, c1 and c2 according to equations (20) and (21) and go to Step 4.
Step 8. Use the obtained swarm as the initial guess for the SD algorithm and move the particles by the SD algorithm for the specified number of times.
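The steps above can be sketched as a single loop: each iteration applies an inertia-weight PSO move to every particle and then refines the particle with a fixed number of SD steps. All parameter values, names and the test function are illustrative assumptions, not the paper's exact settings.

```python
import random

def hca(fn, grad, dim=2, n_particles=15, iters=60, sd_steps=3, alpha=0.05):
    """Hybrid loop sketch: PSO move, then sd_steps steepest-descent refinements."""
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [fn(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):                       # evolutionary (PSO) move
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            for _ in range(sd_steps):                  # conventional (SD) refinement
                gr = grad(xs[i])
                xs[i] = [xd - alpha * gd for xd, gd in zip(xs[i], gr)]
            f = fn(xs[i])
            if f < pval[i]:
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:
                    gbest, gval = xs[i][:], f
    return gbest, gval
```

The random PSO move supplies the global sweep of the search space, while the SD steps supply the precise local search near each candidate, which is the combination of abilities the method aims for.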

Genetical Swarm Optimization (GS)
Genetic Algorithms [16], [17] simulate natural evolution, in terms of survival of the fittest, adopting pseudo-biological operators such as selection, crossover and mutation. Selection is the process by which the most highly rated individuals in the current generation are chosen to be involved as parents in the creation of a new generation. The crossover operator produces two new individuals by recombining the information from two parents. The random mutation of some gene values in an individual is the third GA operator. One of the most recently developed evolutionary techniques is Particle Swarm Optimization [18]-[20], which is based on a model of social interaction between independent agents and uses social knowledge, or swarm intelligence, to find the global maximum or minimum of a function. A brief introduction to the PSO algorithm is given in this section. PSO uses social rules to search the parameter space by controlling the trajectories of a set of independent particles. The position of each particle, representing a particular solution of the problem, is used to compute the value of the fitness function to be optimized. Each particle may change its position, and consequently explore the solution space, simply by varying its associated velocity. The main PSO operator is, in fact, the velocity update, which takes into account the best position, in terms of fitness value, reached by all the particles during their paths, G, and the best position that the agent itself has reached during its search, P_i, resulting in a migration of the entire swarm towards the global optimum. At each iteration the particle moves according to its velocity and position; the cost function to be optimized is evaluated for each particle in order to rank the current location.
The velocity of the particle is then stochastically updated according to the following formula:

v_i = ω v_i + c1 φ1 (P_i − x_i) + c2 φ2 (G − x_i)

The term ω < 1 is known as the "inertia weight": a friction factor chosen between 0 and 1 that determines to what extent the particle remains on its original course unaffected by the pull of the other two terms; it is very important for preventing oscillations around the optimal value. The other two terms are known as self-knowledge and social knowledge; they are balanced by the scaling factors c1,2, which are constants, and by φ1,2, which are random positive numbers uniformly distributed between 0 and 1. Some comparisons of the performance of GA and PSO are present in the literature [19][20][21], underlining the reliability and convergence speed of both methods, but keeping them separate. In particular, PSO has a faster convergence rate than GA early in the run, but is often outperformed by GA in long simulation runs, where the latter finds a better solution. In any case, the population-based representation of the parameters that characterize a particular solution is the same for both algorithms; therefore it is possible to implement a hybrid technique that exploits the qualities and uniqueness of the two algorithms. Some attempts have been made in this direction with good results, but with weak integration of the two strategies, because one algorithm is used mainly as a pre-optimizer for the initial population of the other. The new hybrid technique proposed here, called Genetical Swarm Optimization, consists of a strong cooperation of GA and PSO, since it maintains the integration of the two techniques for the entire run. In each iteration the population is divided into two parts, which are evolved with the two techniques respectively.
The two parts are then recombined into the updated population, which is again divided randomly into two parts in the next iteration for another run of genetic or particle swarm operators.
The GS algorithm for solving the optimal reactive power dispatch problem proceeds as follows.
Step 1. Initialize the population.
Step 2. Randomly select the population.
Step 3. Calculate the fitness of all individuals.
Step 4. Split the population between GA and PSO.
Step 5. In GA: i. selective reproduction, ii. crossover, iii. mutation.
Step 6. In PSO: i. velocity updating, ii. calculation of new positions, iii. updating of personal and global bests.
Step 7. Form the new population by combining the outcomes of both PSO and GA.
Step 8. If the best individual has been reached, stop; otherwise go to Step 2.
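One GS iteration can be sketched as follows, under stated assumptions: a hybridization fraction `hpc` of the population is regenerated by GA operators (tournament selection, arithmetic crossover, Gaussian mutation), while the rest is moved by a simplified PSO step toward the global best (velocities and personal bests omitted for brevity). All names, operator choices and parameter values are illustrative, not the paper's exact implementation.

```python
import random

def gs_iteration(pop, fn, hpc=0.5, sigma=0.1):
    """One Genetical Swarm Optimization step (sketch).

    pop : list of individuals (lists of floats); fn : fitness to minimize;
    hpc : fraction of the population handled by GA; sigma : mutation scale.
    """
    n = len(pop)
    fitness = [fn(ind) for ind in pop]
    gbest = pop[min(range(n), key=lambda i: fitness[i])]
    k = int(hpc * n)
    new_pop = []
    for _ in range(k):                               # --- GA part ---
        a, b = random.sample(range(n), 2)            # tournament selection x2
        p1 = pop[a] if fitness[a] < fitness[b] else pop[b]
        a, b = random.sample(range(n), 2)
        p2 = pop[a] if fitness[a] < fitness[b] else pop[b]
        lam = random.random()                        # arithmetic crossover
        child = [lam * u + (1 - lam) * v + random.gauss(0, sigma)
                 for u, v in zip(p1, p2)]            # + Gaussian mutation
        new_pop.append(child)
    for ind in pop[k:]:                              # --- PSO part ---
        moved = [x + random.random() * (g - x) for x, g in zip(ind, gbest)]
        new_pop.append(moved)
    return new_pop
```

Because both halves are recombined into one population each iteration, GA exploration and PSO exploitation act on the same pool throughout the run, which is the strong integration the method relies on.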

Conclusion
In this paper, the proposed SM, HCA and GS algorithms have successfully solved the reactive power problem. The performance of the proposed algorithms is demonstrated through evaluation on the standard IEEE 30-bus power system. The results show that all three algorithms solve the reactive power problem with a rapid convergence rate, and that SM has a slight edge over HCA and GS in reducing the real power loss.