The idea behind PCA

In the field of machine learning, PCA is used to reduce the dimension of the features. Usually we collect a lot of features to feed the machine learning model, believing that more features provide more information and will lead to better results.

But some of the features don't really bring new information, because they are correlated with other features. PCA is introduced to remove this correlation by approximating the original data in a subspace: some features of the original data may be correlated with each other in the original space, but this correlation doesn't exist in the subspace approximation.
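As a minimal sketch of this idea (assuming NumPy and a hypothetical data matrix `X`, not from the original post), projecting the centered data onto its top principal directions gives a lower-dimensional representation whose features are uncorrelated:

```python
import numpy as np

# Hypothetical data matrix: m samples, each with several (possibly correlated) features
X = np.random.randn(100, 5)

# Center the data, then compute the principal directions via SVD
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Keep the top-k directions and project the data into that subspace
k = 2
X_reduced = X_centered @ Vt[:k].T   # shape (m, k)

# The features of the reduced representation are uncorrelated:
# the covariance matrix of X_reduced is (approximately) diagonal
print(np.round(np.cov(X_reduced, rowvar=False), 3))
```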

Visually, let’s assume that our original data points \(x_1 \dots x_m\) have 2 features, so they can be visualized in a 2D space:

Continue Reading ...

The case we are handling: a 2-layer network

2layer_nn_bpp

Continue Reading ...

Background: the network and symbols

First, the network architecture is described as follows:

2layer_nn

Continue Reading ...

Dropout regularization

Dropout is a commonly used regularization method. It can be described by the diagram below: only part of the neurons in the whole network are updated. Mathematically, we keep each neuron active with some probability \(p\) (we use 0.5), and otherwise leave it asleep:

dropout
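A minimal sketch of this in NumPy (the inverted-dropout variant, which scales at training time so nothing changes at test time; `H` is a hypothetical matrix of hidden activations, not from the original post):

```python
import numpy as np

p = 0.5                       # probability of keeping a neuron active
H = np.random.randn(4, 10)    # hypothetical hidden-layer activations

# Training time: randomly switch off each neuron with probability 1 - p.
# Dividing by p (inverted dropout) keeps the expected activation unchanged.
mask = (np.random.rand(*H.shape) < p) / p
H_train = H * mask

# Test time: use all neurons, no mask and no extra scaling needed
H_test = H
```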

Continue Reading ...

Weight Initialization

All-Zero Initialization

It is tempting to set all the weights to zero, but this is terribly wrong, because all-zero initialization makes all the neurons compute the same thing and receive the same update during backpropagation. We don’t need so many identical neurons. Actually, this problem exists whenever the weights are initialized to the same value.

Small random values

One guess to solve the problem of all-zero initialization is setting the weights to small random values, such as \(W=0.01*np.random.randn(D,H)\). This is also problematic, because very small weights cause very small updates, and the update values become smaller and smaller during backpropagation. In a deep network this problem is serious: you may find that the upper layers never update.
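A minimal sketch of the effect (hypothetical layer width and depth, not from the original post): with small random weights the activations shrink layer by layer, and the backpropagated signal shrinks with them.

```python
import numpy as np

D = 500                       # hypothetical layer width
a = np.random.randn(1, D)     # input activations

for layer in range(10):
    W = 0.01 * np.random.randn(D, D)   # small random initialization
    a = np.tanh(a @ W)                 # forward pass through one layer
    print(layer, np.std(a))            # the std shrinks toward 0 layer by layer
```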

Continue Reading ...

Structural SVM is a variation of SVM, hereafter referred to as SSVM

Special prediction function of SSVM

Firstly, let’s recall the normal SVM’s prediction function:

\[f(x)=\mathrm{sgn}((\omega\cdot x)+b)\]

\(\omega\) is the weight vector, \(x\) is the input, \(b\) is the bias, \(\mathrm{sgn}\) is the sign function, and \(f(x)\) is the prediction result.

One of SSVM’s specialties is its prediction function:

\[f_\omega(x)=\operatorname{argmax}_{y\in\Upsilon}[\omega\cdot \Phi(x,y)]\]

\(y\) is a possible prediction result, \(\Upsilon\) is the search space of \(y\), and \(\Phi\) is some function of \(x\) and \(y\): \(\Phi(x,y)\) is a joint feature vector that describes the relationship between \(x\) and \(y\).

Then, for a given \(\omega\), different predictions will be made according to different \(x\).
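A minimal sketch of this argmax prediction (the joint feature map `phi` and the toy values are hypothetical illustrations, assuming a small finite search space `Y`):

```python
import numpy as np

def phi(x, y, num_classes):
    """Hypothetical joint feature map: copy x into the block belonging to label y."""
    feat = np.zeros(len(x) * num_classes)
    feat[y * len(x):(y + 1) * len(x)] = x
    return feat

def predict(w, x, Y, num_classes):
    # f_w(x) = argmax_{y in Y} w . phi(x, y)
    scores = [w @ phi(x, y, num_classes) for y in Y]
    return Y[int(np.argmax(scores))]

# Toy usage
num_classes = 3
x = np.array([1.0, -2.0])
w = np.random.randn(len(x) * num_classes)
print(predict(w, x, list(range(num_classes)), num_classes))
```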

Continue Reading ...

Sepp Hochreiter graduated from Technische Universität München; LSTM was invented while he was at TUM. Now he is the head of the Institute of Bioinformatics at Johannes Kepler University Linz.

Today he came by Munich and gave a lecture at the Fakultät Informatik.

At first, Hochreiter praised how hot Deep Learning (to be referred to as DL) is around the world these days, especially LSTM, which is used in the new version of Google Translate published a few days ago. The improvements DL has made in the fields of vision and NLP are very impressive.

Then he started to tell the magic of DL, taking face recognition with the so-called CNN (Convolutional Neural Network) as an example:

dl_faces

Continue Reading ...

Centering and Scaling

NN_Pre
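A minimal sketch of centering and scaling the input data before feeding it to a NN (the data matrix `X` is a hypothetical illustration):

```python
import numpy as np

X = np.random.randn(100, 3) * 5 + 2     # hypothetical raw data: 100 samples, 3 features

X_centered = X - X.mean(axis=0)          # centering: zero mean per feature
X_scaled = X_centered / X.std(axis=0)    # scaling: unit standard deviation per feature
```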

Continue Reading ...

nn

Continue Reading ...

Simple Neuron

Neuron0

The above diagram shows a neuron in a NN; it simulates a real neuron:

it has inputs: \(x_{0},x_{1}\dots x_{i}\)

it has a weight for each input: \(\omega_{0},\omega_{1}\dots \omega_{i}\), the weight vector

it has bias \(b\)

it has a threshold for the “activation function”
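A minimal sketch of such a neuron in NumPy (the step activation with a threshold is one possible choice matching the description above; the toy values are illustrative):

```python
import numpy as np

def neuron(x, w, b, threshold=0.0):
    # weighted sum of the inputs plus bias, then threshold activation
    z = np.dot(w, x) + b
    return 1.0 if z > threshold else 0.0

x = np.array([0.5, -1.0, 2.0])   # inputs x_0, x_1, x_2
w = np.array([0.1, 0.4, -0.3])   # weights w_0, w_1, w_2
b = 0.2
print(neuron(x, w, b))
```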

Continue Reading ...

Review

For the \(i\)th sample \((x_i,y_i)\) in the training set, we have the following loss function:

\[L_i= \sum_{j\neq y_i}\max(0,\,w_j^T\cdot x_i-w_{y_i}^T\cdot x_i+\Delta)\]

\(w_j^T\cdot x_i\) is the score for classifying \(x_i\) as class \(j\), \(w_{y_i}^T\cdot x_i\) is the score of the correct classification (classifying to class \(y_i\)), and \(w_j\) is the \(j\)th row of \(W\).
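A minimal sketch of computing \(L_i\) for one sample, assuming a weight matrix `W` of shape [K x D] whose rows are the \(w_j\) (toy values, \(\Delta = 1\)):

```python
import numpy as np

def svm_loss_i(W, x_i, y_i, delta=1.0):
    # scores for all K classes: s_j = w_j^T . x_i
    scores = W @ x_i
    # hinge loss over all incorrect classes j != y_i
    margins = np.maximum(0, scores - scores[y_i] + delta)
    margins[y_i] = 0.0
    return margins.sum()

K, D = 3, 4
W = np.random.randn(K, D)
x_i = np.random.randn(D)
print(svm_loss_i(W, x_i, y_i=1))
```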

Problem

Problem 1:

Considering the geometrical meaning of the weight vector \(\omega\), it is easy to find out that \(\omega\) is not unique: \(\omega\) can change within a small region and still result in the same \(L_i\).

Problem 2:

If the values in \(\omega\) are scaled, the computed loss will also be scaled by the same ratio. Considering a loss of 15, if we scale all the weights in \(\omega\) by 2, the loss will be scaled to 30. But this kind of scaling is meaningless; it doesn’t really represent the loss.

Continue Reading ...

Loss Function

Now we want to solve an image classification problem, for example classifying an image as cow or cat. The machine learning algorithm scores an unclassified image for each class, and decides which class the image belongs to based on the scores. One of the keys of the classification algorithm is designing this loss function.

Mapping/computing image pixels to a confidence score for each class

Assume a training set:

\[(x_i,y_i)\]

\(x_i\) is the image and \(y_i\) is the corresponding class

\(i\in 1\dots N\) means the training set contains N images

\(y_i\in 1\dots K\) means there are K image categories

So a score function maps x to y:

\[f(x_i,W,b)=W\cdot x_i+b\]

In the above function, each image \(x_i\) is flattened to a 1-dimensional vector.

If one image’s size is 32x32 pixels with 3 channels:

\(x_i\) will be a 1-dimensional vector with length D=32x32x3=3072

the parameter matrix \(W\) has the size [KxD]; it is often called the weights

\(b\), of size [Kx1], is often called the bias vector

In this way, \(W\) evaluates \(x_i\)’s confidence scores for the K categories at the same time.
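A minimal sketch of this score function with the shapes above (the random values are only placeholders for illustration):

```python
import numpy as np

K, D = 10, 3072                     # K categories, 32*32*3 = 3072 pixel values
W = np.random.randn(K, D) * 0.01    # weights, shape [K x D]
b = np.zeros(K)                     # bias vector, shape [K x 1]

x_i = np.random.rand(D)             # one image, flattened to a length-D vector
scores = W @ x_i + b                # one confidence score per category
print(scores.shape)                 # (10,)
```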

Continue Reading ...