BACK PROPAGATION NEURAL NETWORK TECHNIQUE FOR REDUCTION OF REAL POWER LOSS

In this work, a particle swarm optimization algorithm is hybridized with a back-propagation neural network (PSBP) to solve the reactive power problem. The proposed PSBP methodology improves the search: the PSO algorithm optimizes the initial weights and threshold values and, when it terminates, the best point found serves as the starting point for the back-propagation neural network algorithm, which then carries the search through to the network training goal. In the particle swarm, each particle's position represents the set of network weights at the current iteration. To evaluate the proposed algorithm, it has been tested on the IEEE 118-bus system and compared with other algorithms; the simulation results show that the proposed algorithm reduces the real power loss effectively.


Introduction
Minimizing the true (real) power loss is the key aim of the reactive power optimization problem. Various techniques [1][2][3][4][5][6][7][8] have been applied to solve the optimal reactive power problem, yet many difficulties arise in solving it because of the various types of constraints. Subsequently, many evolutionary techniques [9][10][11][12][13][14][15][16][17][18][19][20] were applied to the reactive power problem, but many of these algorithms become stuck in locally optimal solutions and fail to balance exploration and exploitation during the search for the global solution. In this work, a particle swarm optimization algorithm is hybridized with a back-propagation neural network (PSBP) to solve the reactive power problem. The proposed PSBP methodology improves the search [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36]: the improved PSO algorithm optimizes the initial weights and threshold values and, when it terminates, the best point found serves as the starting point for the back-propagation neural network algorithm, which then carries the search through to the network training goal. In the particle swarm, each particle's position represents the set of network weights at the current iteration. To evaluate the proposed algorithm, it has been tested on the IEEE 118-bus system and compared with other algorithms; the simulation results show that the proposed algorithm reduces the real power loss effectively.

A. Real Power Loss
The objective of the problem is to minimize the true (real) power loss:

$P_L = \sum_{k \in N_{br}} g_k \left( V_i^2 + V_j^2 - 2 V_i V_j \cos \theta_{ij} \right)$

where $g_k$ is the conductance of branch $k$ connecting buses $i$ and $j$, $V_i$ and $V_j$ are the voltage magnitudes at those buses, and $\theta_{ij}$ is the voltage angle difference between them.

B. Amplification of Voltage Profile

The voltage deviation is given by:

$VD = \sum_{i \in N_L} \left| V_i - 1.0 \right|$

where $N_L$ is the set of load buses and voltages are expressed in per unit.
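The two objectives can be illustrated with a minimal Python sketch; the branch-list representation, zero-based bus indexing, and 1.0 p.u. reference voltage here are assumptions for illustration, not settings from the paper:

```python
import math

def real_power_loss(branches, V, theta):
    """P_L = sum over branches k of g_k*(V_i^2 + V_j^2 - 2*V_i*V_j*cos(theta_i - theta_j)).

    branches: list of (i, j, g) tuples, g = branch conductance.
    V, theta: bus voltage magnitudes (p.u.) and angles (rad)."""
    loss = 0.0
    for (i, j, g) in branches:
        loss += g * (V[i] ** 2 + V[j] ** 2
                     - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
    return loss

def voltage_deviation(V, load_buses, v_ref=1.0):
    """VD = sum over load buses of |V_i - 1.0| (p.u.)."""
    return sum(abs(V[i] - v_ref) for i in load_buses)
```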

Hybridized Algorithm
The improved particle swarm optimization algorithm is independent of the problem domain. It uses the coding of the decision variables as its operation object and the fitness function as its search guide, and it can exploit information from several search points simultaneously. It is therefore well suited to nonlinear, non-differentiable, and multi-objective problems.
The $i$-th particle is a D-dimensional vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, which represents the particle's position in the search space. Every particle position $X_i$ is a potential solution; substituting it into the objective function yields its fitness value, from which we can judge whether $X_i$ is the optimal answer. The velocity of the particle is also D-dimensional, recorded as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The best position found by particle $i$ up to iteration $h$ is $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, and the best position found by the whole swarm up to iteration $h$ is $P_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$. The basic update formulas are:

$v_{id}^{h+1} = w\, v_{id}^{h} + c_1 r_1 (p_{id} - x_{id}^{h}) + c_2 r_2 (p_{gd} - x_{id}^{h})$

$x_{id}^{h+1} = x_{id}^{h} + v_{id}^{h+1}$

where:
$c_1, c_2$: acceleration coefficients, adjusting the maximum step length toward the individual best particle and the global best particle respectively. Appropriate $c_1$ and $c_2$ speed up convergence and help avoid falling into local optima.
$r_1, r_2$: random numbers between 0 and 1, controlling the weight of the velocity terms.
$w$: inertia factor, oriented toward global searching. Its initial value is usually taken as 0.9 and decreased to 0.1 as the iterations proceed; it mainly governs the global search, making the search space converge to a promising region. The linearly decreasing inertia weight (LDW) is given by

$w = w_{max} - (w_{max} - w_{min})\, t / t_{max}$

where $w_{max}$ and $w_{min}$ are the maximum and minimum values of $w$, $t$ is the current iteration step, and $t_{max}$ is the maximum iteration step.
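The basic PSO update with a linearly decreasing inertia weight can be sketched in a few lines of Python; the coefficient values $c_1 = c_2 = 2.0$ and the $w$ range are the commonly used defaults, assumed here rather than taken from the paper:

```python
import random

def ldw(t, t_max, w_max=0.9, w_min=0.1):
    """Linearly decreasing inertia weight: w_max at t=0 down to w_min at t=t_max."""
    return w_max - (w_max - w_min) * t / t_max

def pso_step(x, v, pbest, gbest, t, t_max, c1=2.0, c2=2.0, rng=random):
    """One velocity/position update for a single particle.

    x, v, pbest, gbest: lists of equal length D (position, velocity,
    particle best, global best)."""
    w = ldw(t, t_max)
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = rng.random(), rng.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

A particle sitting exactly on both its own best and the global best with zero velocity stays put, as expected from the update equations.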
To improve on the linear schedule, the PSO method can use a nonlinear variation of the weight with the momentum term $2^{-\theta}$, where $\theta = t/t_{max}$:

$w = (w_{max} - w_{min})\, 2^{-\theta} + w_{min}$

When $t$ is small, $2^{-\theta}$ is close to 1 and $w$ is close to $w_{max}$, which ensures the global search ability. As $t$ increases, $w$ decreases nonlinearly, ensuring the search ability in local regions. In the later period ($t = t_{max}$), $w$ does not fall all the way to $w_{min}$, avoiding the problems caused by an excessive decrease of $w$: the loss of global search ability and the decline of swarm diversity.
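Assuming the nonlinear momentum weight takes the form $w = (w_{max} - w_{min})\,2^{-\theta} + w_{min}$ with $\theta = t/t_{max}$ (a reconstruction of the description above, so treat the exact form as an assumption), it can be computed as:

```python
def nonlinear_w(t, t_max, w_max=0.9, w_min=0.1):
    """Nonlinear inertia weight with momentum term 2**(-theta), theta = t/t_max.
    Starts at w_max and decays nonlinearly, but never below the midpoint
    w_min + 0.5*(w_max - w_min), so late-stage search ability is preserved."""
    theta = t / t_max
    return (w_max - w_min) * 2.0 ** (-theta) + w_min
```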
The node activation function of the back-propagation neural network is generally an "S"-shaped function. A common activation function $f(x)$ is the differentiable sigmoid:

$f(x) = \dfrac{1}{1 + e^{-x}}$

The error function $R$ is:

$R = \dfrac{1}{2} \sum_{k=1}^{n} (d_k - y_k)^2$

where $d_k$ is the expected output, $y_k$ is the actual output, and $n$ is the sample length.
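The sigmoid, its derivative (conveniently expressed through the output itself, which is what makes back propagation cheap), and the squared-error function translate directly into Python:

```python
import math

def sigmoid(x):
    """S-shaped activation f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(y):
    """Derivative in terms of the output: f'(x) = f(x) * (1 - f(x))."""
    return y * (1.0 - y)

def error(expected, actual):
    """R = 1/2 * sum_k (d_k - y_k)^2 over the training samples."""
    return 0.5 * sum((d - y) ** 2 for d, y in zip(expected, actual))
```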
The uniform expression of the weight update formula is the delta rule

$w_{ij}(t+1) = w_{ij}(t) + \eta\, \delta_j\, o_i$

where $\eta$ is the learning rate, $\delta_j$ is the error term of node $j$, and $o_i$ is the output of node $i$. The training procedure is as follows:

Step 1: Select n samples as a training set.
Step 2: Initialize the weights and bias values of the neural network; the initial values are random numbers in (-1, 1). Every sample in the training set then undergoes the following processing:
Step 3: According to the connection weights, the input-layer data are weighted and fed into the activation function of the hidden layer, producing new values. These new values are in turn weighted and fed into the activation function of the output layer, and the outputs of the output layer are calculated.
Step 4: Compute the error between the output result and the desired result; if an error remains, the training is not yet complete and the error is propagated backwards.
Step 5: Adjust the weights and bias values.
Step 6: With the new weights and bias values, recalculate the output layer. The calculation does not stop until the training set meets the stopping condition.
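Steps 1-6 above can be sketched as a plain-Python training loop for a one-hidden-layer, single-output sigmoid network; the network size, learning rate, and stopping values here are illustrative assumptions, not the paper's settings:

```python
import math
import random

def train_bp(samples, n_in, n_hid, eta=0.5, target_mse=1e-3,
             max_epochs=10000, seed=1):
    """Back-propagation training of a 1-hidden-layer, 1-output sigmoid net.

    samples: list of (input_list, desired_output) pairs."""
    rng = random.Random(seed)
    # Step 2: weights and bias values initialized as random numbers in (-1, 1)
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
    b_h = [rng.uniform(-1, 1) for _ in range(n_hid)]
    w_o = [rng.uniform(-1, 1) for _ in range(n_hid)]
    b_o = rng.uniform(-1, 1)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(max_epochs):
        sse = 0.0
        for x, d in samples:
            # Step 3: forward pass through hidden layer, then output layer
            h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
                 for ws, b in zip(w_h, b_h)]
            y = sig(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
            # Step 4: error between output result and desired result
            err = d - y
            sse += err * err
            # Step 5: adjust weights and bias values (delta rule)
            delta_o = err * y * (1.0 - y)
            for j in range(n_hid):
                delta_h = delta_o * w_o[j] * h[j] * (1.0 - h[j])
                w_o[j] += eta * delta_o * h[j]
                for i in range(n_in):
                    w_h[j][i] += eta * delta_h * x[i]
                b_h[j] += eta * delta_h
            b_o += eta * delta_o
        # Step 6: stop when the training set meets the error condition
        if sse / len(samples) < target_mse:
            break
    return w_h, b_h, w_o, b_o
```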
The tangible hybridization process can be narrated as follows:

Step 1: Initialization. Let $n_i$ be the number of neurons in the input layer, $n_h$ the number of neurons in the hidden layer, and $n_o$ the number of neurons in the output layer. The dimension D of the particle swarm is:

$D = n_i n_h + n_h n_o + n_h + n_o$

Step 2: Set the fitness function of the particle swarm. In this text, we choose the mean square error of the BP neural network as the fitness function:

$MSE = \dfrac{1}{M} \sum_{k=1}^{M} (y_k - \bar{y}_k)^2$

where $y_k$ is the theoretical output for sample $k$, $\bar{y}_k$ is the actual output for sample $k$, and $M$ is the number of training samples of the neural network.

Step 3: Use the improved particle swarm algorithm to optimize the weights and the threshold values of the BP network.
Step 4: Obtain the optimal weights and threshold values as one flattened vector: the input-hidden weights, the hidden-output weights, the hidden-layer thresholds, and the output-layer thresholds, in that order.

Step 5: Take the optimal weights and threshold values as the initial weights and threshold values of the BP network, then put them into the neural network for training. Adjust the weights and threshold values by the BP algorithm until the network's mean square error satisfies MSE < e, where e is the preset expected index.

Simulation Results
The IEEE 118-bus system [37] is used as the test system to validate the performance of the proposed algorithm. Table 1 shows the limit values, and Table 2 shows the comparison of results.

Conclusion
In this paper, the proposed PSBP methodology successfully solved the reactive power problem. The PSO algorithm optimizes the initial weights and threshold values and, when it terminates, the best point found serves as the starting point for the back-propagation neural network algorithm, which then carries the search through to the network training goal. To evaluate the proposed algorithm, it has been tested on the IEEE 118-bus system and compared with other algorithms; the simulation results show that the proposed algorithm reduces the real power loss effectively.