Weight Initialization

**All-Zero Initialization**

It is tempting to simply set all the weights to zero, but this is a serious mistake: with identical weights, every neuron in a layer computes the same output and therefore receives the same gradient during backpropagation, so the neurons remain identical after every update. A layer of identical neurons is no more expressive than a single neuron. Note that this symmetry problem arises whenever the weights are initialized to the same value, not just when that value is zero.
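The symmetry problem can be seen in a tiny example. Below is a minimal sketch (the network shape, data, and constant 0.5 are made up for illustration): a one-hidden-layer tanh network whose weights are all initialized to the same constant. After one forward/backward pass, every column of the hidden-layer gradient is identical, so all hidden units receive the same update and stay clones of each other.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))        # 5 samples, 3 input features
y = rng.normal(size=(5, 1))        # regression targets

# Every weight initialized to the same constant (the symmetry problem
# is the same for 0.0; with tanh and zeros, the gradients are all zero).
W1 = np.full((3, 4), 0.5)          # input -> 4 hidden units
W2 = np.full((4, 1), 0.5)          # hidden -> output

# Forward pass: every column of h is identical, because every
# hidden unit applies the same weights to the same input.
h = np.tanh(x @ W1)
out = h @ W2

# Backward pass for mean squared error.
dout = 2.0 * (out - y) / len(x)
dW2 = h.T @ dout                   # identical rows: one per hidden unit
dh = dout @ W2.T
dW1 = x.T @ (dh * (1.0 - h**2))    # identical columns: one per hidden unit

# All hidden units get the exact same gradient, so a gradient step
# keeps them identical forever.
print(np.allclose(dW1, dW1[:, :1]))   # every column equals the first
print(np.allclose(dW2, dW2[:1, :]))   # every row equals the first
```

Running this prints `True` twice: the hidden units move in lockstep, which is exactly the symmetry that random initialization is meant to break.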

**Small random values**

A natural fix for the symmetry problem is to initialize the weights with small random values instead. This breaks symmetry, but it is still problematic: **very small weights produce very small gradients**, and the gradient signal shrinks further each time it is propagated backward through a layer. In a deep network this becomes serious, and you may find that the layers far from the output barely update at all.
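The shrinking-signal effect is easy to observe in the forward pass alone. The sketch below assumes a made-up setup: a 10-layer tanh network with 500 units per layer, weights drawn as `0.01 * N(0, 1)`, and a batch of unit-variance inputs. The standard deviation of the activations collapses toward zero with depth; since backpropagated gradients are scaled by these same weights and activations, the gradient signal decays the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 500))   # batch of unit-variance inputs

h = x
stds = []
for layer in range(10):
    # Small random initialization: tiny zero-mean Gaussian weights.
    W = 0.01 * rng.normal(size=(500, 500))
    h = np.tanh(h @ W)
    stds.append(h.std())           # track activation scale per layer

# Activation scale shrinks roughly geometrically with depth,
# so deep layers see (and send back) an almost-zero signal.
for i, s in enumerate(stds):
    print(f"layer {i}: activation std = {s:.2e}")
```

Each layer multiplies the signal scale by roughly `0.01 * sqrt(500) ≈ 0.22`, so by layer 10 the activations (and hence the backpropagated gradients) are vanishingly small. Scaling the weights by the fan-in instead, as in Xavier/He initialization, keeps this scale roughly constant across layers.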