NN Backpropagation

The case we are handling: a two-layer network

The above diagram shows the network to be used: the input $\scriptsize X_1$ feeds a hidden layer with weights $\scriptsize W_1$, whose activated output $\scriptsize X_2$ feeds an output layer with weights $\scriptsize W_2$ producing the scores $\scriptsize f$. From the last blog we get the softmax loss function:

$$\scriptsize \mathscr{L}=\frac{1}{N}\sum_i -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$$

In order to use the gradient descent algorithm to train $\scriptsize W_1$ and $\scriptsize W_2$, we need to compute the derivatives of $\scriptsize\mathscr{L}$ with respect to $\scriptsize W_1$ and $\scriptsize W_2$, which are:

$$\scriptsize \frac{d\mathscr{L}}{dW_1} \quad\text{and}\quad \frac{d\mathscr{L}}{dW_2}$$

Once we have $\scriptsize \frac{d\mathscr{L}}{dW_2}$ at hand, $\scriptsize W_2$ can be trained using gradient descent, where $\scriptsize \eta$ is the learning rate:

$$\scriptsize W_2 \leftarrow W_2 - \eta\frac{d\mathscr{L}}{dW_2}$$

Compute $\frac{d\mathscr{L}}{dW_2}$

How do we compute $\scriptsize \frac{d\mathscr{L}}{dW_2}$? We use the chain rule to “propagate back” the gradient; for $\scriptsize W_2$:

$$\scriptsize \frac{d\mathscr{L}}{dW_2}=\frac{d\mathscr{L}}{df}\cdot\frac{df}{dW_2}$$

As the last blog described, $\scriptsize \frac{d\mathscr{L}}{df}$ of the $\scriptsize i$th sample can be expressed as the predicted probabilities minus the one-hot target:

$$\scriptsize \frac{d\mathscr{L}_i}{df_k}=p_k-\mathbb{1}(k=y_i), \qquad p_k=\frac{e^{f_k}}{\sum_j e^{f_j}}$$

considering $\scriptsize f=W_2\cdot X_2+b_2$, we have $\scriptsize \frac{df}{dW_2}=X_2$,

then:

$$\scriptsize \frac{d\mathscr{L}}{dW_2}=\frac{d\mathscr{L}}{df}\cdot X_2^T$$

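As a small NumPy sketch of these two steps (single sample, column-vector convention; the shapes and values are made up for illustration, not taken from the post):

```python
import numpy as np

np.random.seed(0)

# Toy shapes: 3 hidden units, 4 classes, one sample (column vectors).
X2 = np.random.randn(3, 1)
W2 = np.random.randn(4, 3)
b2 = np.random.randn(4, 1)
y = 2  # true class index of this sample

f = W2 @ X2 + b2  # scores, shape (4, 1)

# dL/df for the softmax loss: probabilities minus the one-hot target.
p = np.exp(f - f.max())  # shift by max for numerical stability
p /= p.sum()
df = p.copy()
df[y] -= 1.0

# dL/dW2 = dL/df * X2^T: an outer product with the same shape as W2.
dW2 = df @ X2.T
```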
Compute $\frac{d\mathscr{L}}{dW_1}$

Now we will compute $\scriptsize \frac{d\mathscr{L}}{dW_1}$ to train $\scriptsize W_1$. The gradient will be propagated back to $\scriptsize W_1$ like this:

$$\scriptsize \frac{d\mathscr{L}}{dW_1}=\frac{d\mathscr{L}}{df}\cdot\frac{df}{dX_2}\cdot\frac{dX_2}{dY_1}\cdot\frac{dY_1}{dW_1}$$

The same as before, since $\scriptsize f=W_2\cdot X_2+b_2$ and $\scriptsize Y_1 = W_1\cdot X_1 + b_1$, the above equation results in:

$$\scriptsize \frac{d\mathscr{L}}{dW_1}=\left(\left(W_2^T\cdot\frac{d\mathscr{L}}{df}\right)\odot\frac{dX_2}{dY_1}\right)\cdot X_1^T$$

where $\scriptsize \odot$ is the element-wise product, because the activation is applied element-wise.

$\scriptsize \frac{d\mathscr{L}}{df}$ is already known, and $\scriptsize \frac{dX_2}{dY_1}$ depends on the activation function of the hidden layer.
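For example, if the hidden layer uses a sigmoid activation (an assumption here; the post does not fix one), this factor has a simple closed form in terms of $\scriptsize X_2$ itself:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

# Toy pre-activation values Y1 of the hidden layer (made up).
Y1 = np.array([[-1.0], [0.5], [2.0]])
X2 = sigmoid(Y1)

# Sigmoid derivative, element-wise: dX2/dY1 = X2 * (1 - X2).
dX2_dY1 = X2 * (1.0 - X2)

# A ReLU hidden layer would instead give (Y1 > 0) as the factor.
```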

Using this method, we can easily propagate the gradient back through the whole network to the input layer and update all of the layers, no matter how many layers there are in between.
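To make the whole procedure concrete, here is a minimal NumPy sketch of training the two-layer network on one sample. The sigmoid hidden layer, the shapes, and the learning rate are my own choices for illustration, not taken from the post:

```python
import numpy as np

np.random.seed(1)

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

# One sample with 2 features; 3 hidden units; 4 output classes.
X1 = np.random.randn(2, 1)
y_true = 1
W1, b1 = np.random.randn(3, 2), np.zeros((3, 1))
W2, b2 = np.random.randn(4, 3), np.zeros((4, 1))
lr = 0.1  # learning rate

for step in range(200):
    # Forward pass.
    Y1 = W1 @ X1 + b1
    X2 = sigmoid(Y1)
    f = W2 @ X2 + b2
    p = np.exp(f - f.max())
    p /= p.sum()
    loss = float(-np.log(p[y_true, 0]))

    # Backward pass, following the chain rule derived above.
    df = p.copy()
    df[y_true] -= 1.0            # dL/df
    dW2 = df @ X2.T              # dL/dW2 = dL/df * X2^T
    db2 = df
    dX2 = W2.T @ df              # propagate the gradient to the hidden layer
    dY1 = dX2 * X2 * (1.0 - X2)  # multiply by dX2/dY1 (sigmoid)
    dW1 = dY1 @ X1.T             # dL/dW1
    db1 = dY1

    # Gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

With these updates the loss on the sample steadily decreases, which is an easy sanity check that the gradients are wired up correctly.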


Wangxin

I am an algorithm engineer focused on computer vision. I know it would be more elegant to shut up and show my code, but I simply can't stop myself from learning and explaining new things ...
