They are "dropped out" randomly: their contribution to the activation of downstream neurons is temporarily removed during Forward propagation, and no weight updates are applied to the dropped Artificial Neurons during Backpropagation.
As an Artificial Neural network learns, neuron weights settle into their context within the network. The weights of each neuron are tuned for specific features, providing some specialization. Neighboring neurons come to rely on this specialization, which, if taken too far, can result in a fragile Machine Learning model too specialized to the Training dataset, a condition referred to as Overfitting. This reliance on context for a neuron during training is referred to as complex co-adaptation.
You can imagine that if Artificial Neurons are randomly dropped out of the network during training, other Artificial Neurons will have to step in and handle the representation required to make predictions in place of the missing neurons. This is believed to result in multiple independent internal representations being learned by the network.
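The mechanism can be sketched in plain Python. This is a minimal illustration of "inverted" dropout (the variant most libraries use), not the implementation from the referenced Keras article: each activation is zeroed with probability `rate`, and survivors are scaled by 1/(1 - rate) so the expected activation stays the same, which lets the network run unchanged at prediction time. The function name and parameters are illustrative.

```python
import random

def dropout_forward(activations, rate=0.5, training=True, seed=0):
    # Inverted dropout: zero each activation with probability `rate`
    # and scale the survivors by 1/(1 - rate) so the expected value
    # of each activation is unchanged during training.
    if not training or rate == 0.0:
        # At prediction time dropout is disabled; activations pass through.
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() >= rate else 0.0 for a in activations]
```

Because a different random mask is drawn on every training step, no neuron can depend on any particular neighbor being present, which is what discourages the complex co-adaptations described above.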
More Information

There might be more information for this subject on one of the following:
- [#1] Dropout Regularization in Deep Learning Models With Keras - based on information obtained 2018-01-03