Backpropagation
Let's study backpropagation using a worked example.
Consider a neural network with two inputs, two hidden neurons, and two output neurons.
Training of a neural network starts after the weights, biases, and training inputs/outputs have been assigned (manually or randomly). Let us assume the network has been initialized with the weights, biases, and I/O shown below.
The process of backpropagation is used to optimize the weights so that the neural network learns how to correctly map arbitrary inputs to outputs. Our goal is to find weights and biases such that, for the given inputs 0.05 and 0.10, the neural network outputs 0.01 and 0.99.

During the forward pass, the net input to h1 is net_h1 = w1*i1 + w2*i2 + b1*1 = 0.15*0.05 + 0.20*0.10 + 0.35*1 = 0.3775. This net input is passed through the logistic function 1/(1 + e^(-net_h1)), so out_h1 = 1/(1 + e^(-0.3775)) = 0.5933. Similarly, the output of the bottom hidden neuron is out_h2 = 0.5969. The same process is repeated for the output layer; for example, net_o1 = w5*out_h1 + w6*out_h2 + b2*1 = 1.1059, and finally out_o1 = 1/(1 + e^(-net_o1)) = 0.7514. Similarly, out_o2 = 0.7729.

The total error is given by E_total = sum(1/2 * (target - output)^2). Therefore the error at o1 is E_o1 = 1/2 * (target_o1 - out_o1)^2 = 0.2748 and E_o2 = 0.0236, so the total error is E_total = 0.2748 + 0.0236 = 0.2984.
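The forward pass and the error computation can be reproduced with a short script like the one below. Only w1 = 0.15, w2 = 0.20, b1 = 0.35 and the inputs/targets are stated above; the values used here for w3, w4, w5, w6, w7, w8, and b2 are assumptions chosen so that the quoted intermediate results (net_h1 = 0.3775, out_o1 ≈ 0.7514, E_total ≈ 0.2984) come out the same. This is a minimal sketch, not a general implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# inputs and targets from the example
i1, i2 = 0.05, 0.10
target_o1, target_o2 = 0.01, 0.99

# weights and biases (w1, w2, b1 are from the text; the rest are assumed)
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# hidden layer
net_h1 = w1 * i1 + w2 * i2 + b1 * 1      # 0.3775
out_h1 = sigmoid(net_h1)                 # ~0.5933
net_h2 = w3 * i1 + w4 * i2 + b1 * 1
out_h2 = sigmoid(net_h2)                 # ~0.5969

# output layer
net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1   # ~1.1059
out_o1 = sigmoid(net_o1)                      # ~0.7514
net_o2 = w7 * out_h1 + w8 * out_h2 + b2 * 1
out_o2 = sigmoid(net_o2)                      # ~0.7729

# squared-error loss per output and in total
E_o1 = 0.5 * (target_o1 - out_o1) ** 2        # ~0.2748
E_o2 = 0.5 * (target_o2 - out_o2) ** 2        # ~0.0236
E_total = E_o1 + E_o2                         # ~0.2984
print(out_h1, out_h2, out_o1, out_o2, E_total)
```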
Now comes the backward pass, that is, backpropagation.
Let us calculate how much a change in w5 affects the total error, which is obtained by computing the partial derivative of E_total with respect to w5. By the chain rule, ∂E_total/∂w5 = ∂E_total/∂out_o1 * ∂out_o1/∂net_o1 * ∂net_o1/∂w5.
This value is then used to update w5 by moving it a small step against the gradient, scaled by a learning rate.
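As a concrete illustration, the three factors of the chain rule can be evaluated from the forward-pass values computed above. The sketch below reuses the variables from the previous script; the learning rate value is an assumption, since the text does not state one.

```python
# chain rule for dE_total/dw5, reusing the forward-pass variables above
dE_dout_o1 = out_o1 - target_o1              # d(1/2*(t-o)^2)/do = o - t, ~0.7414
dout_o1_dnet_o1 = out_o1 * (1 - out_o1)      # derivative of the logistic function, ~0.1868
dnet_o1_dw5 = out_h1                         # since net_o1 = w5*out_h1 + ..., ~0.5933

dE_dw5 = dE_dout_o1 * dout_o1_dnet_o1 * dnet_o1_dw5   # ~0.0822

eta = 0.5                                    # assumed learning rate
w5_new = w5 - eta * dE_dw5                   # ~0.3589
print(dE_dw5, w5_new)
```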
This procedure is repeated for the weights feeding into the hidden layer. For example, for w1 the error contributions from both outputs must first be summed, because out_h1 influences both o1 and o2, and then the chain rule is applied through the hidden neuron.
Finally, w1 is updated with the same gradient-descent step as w5, as in the sketch below.
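The following sketch shows the hidden-layer gradient for w1, again reusing the variables defined in the earlier scripts (including the assumed learning rate).

```python
# contribution of each output error to out_h1
dEo1_dout_h1 = (out_o1 - target_o1) * out_o1 * (1 - out_o1) * w5
dEo2_dout_h1 = (out_o2 - target_o2) * out_o2 * (1 - out_o2) * w7
dE_dout_h1 = dEo1_dout_h1 + dEo2_dout_h1

# then back through the hidden neuron's activation and its input weight
dout_h1_dnet_h1 = out_h1 * (1 - out_h1)
dnet_h1_dw1 = i1

dE_dw1 = dE_dout_h1 * dout_h1_dnet_h1 * dnet_h1_dw1   # ~0.000439
w1_new = w1 - eta * dE_dw1                            # ~0.1498
print(dE_dw1, w1_new)
```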
Similarly, all the remaining weights are updated, and the whole forward/backward cycle is repeated until the error settles at a (local) minimum; a compact sketch of this loop is given below.
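For completeness, here is a self-contained sketch of the repeated forward/backward cycle for this 2-2-2 network, written with NumPy. The learning rate, the iteration count, the values of w3, w4, w7, w8, and b2, and the choice to leave the biases fixed are all assumptions made only for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.05, 0.10])          # inputs i1, i2
t = np.array([0.01, 0.99])          # targets for o1, o2

W1 = np.array([[0.15, 0.20],        # rows: h1, h2; columns: i1, i2
               [0.25, 0.30]])
W2 = np.array([[0.40, 0.45],        # rows: o1, o2; columns: h1, h2
               [0.50, 0.55]])
b1, b2 = 0.35, 0.60
eta = 0.5                           # assumed learning rate

for step in range(10000):
    # forward pass
    out_h = sigmoid(W1 @ x + b1)
    out_o = sigmoid(W2 @ out_h + b2)
    E_total = 0.5 * np.sum((t - out_o) ** 2)

    # backward pass (chain rule, as in the w5 and w1 examples above)
    delta_o = (out_o - t) * out_o * (1 - out_o)        # dE/dnet_o
    delta_h = (W2.T @ delta_o) * out_h * (1 - out_h)   # dE/dnet_h

    # gradient-descent updates (biases left fixed in this sketch)
    W2 -= eta * np.outer(delta_o, out_h)
    W1 -= eta * np.outer(delta_h, x)

# after many iterations the error shrinks and out_o approaches the targets
print(E_total, out_o)
```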
For more details, go to the site below: