Dropout Regularization

Overview#

Dropout Regularization is a Regularization method where randomly selected Artificial Neurons are ignored during training.

They are "dropped-out" randomly. This means that their contribution to the activation of downstream neurons is temporarily removed on the Forward propagation and any weight updates are also not applied to the Artificial Neurons on the Backpropagation.

As an Artificial Neural network learns, neuron weights settle into their context within the Artificial Neural network. Weights of neurons are tuned for specific features, providing some specialization. Neighboring neurons come to rely on this specialization, which, if taken too far, can result in a fragile Machine Learning model that is too specialized to the Training dataset, which is referred to as Overfitting. This reliance on context for a neuron during training is referred to as complex co-adaptation.

You can imagine that if Artificial Neurons are randomly dropped out of the network during training, other Artificial Neurons will have to step in and handle the representation required to make predictions for the missing Artificial Neurons. This is believed to result in the network learning multiple independent internal representations.

The effect is that the network becomes less sensitive to the specific weights of individual neurons. This in turn results in a network that generalizes better and is less prone to Overfitting.
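In practice, dropout is applied as a layer between hidden layers and is only active during training. As a usage sketch, assuming PyTorch (the article does not name a framework, so this choice and the layer sizes are illustrative):

```python
import torch.nn as nn

# A small feed-forward network with dropout between hidden layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each hidden neuron is dropped with probability 0.5 during training
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 10),
)

model.train()  # dropout active: neurons are randomly zeroed on each forward pass
model.eval()   # dropout disabled: all neurons contribute at inference time
```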

More Information#

There might be more information for this subject on one of the following: