Overview#

Backpropagation is a method used in Artificial Neural networks to calculate the error contribution of each neuron after a batch of data (in image recognition, multiple images) is processed.

Backpropagation is a special case of an older and more general technique called automatic differentiation. In the context of learning, Backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weights of neurons by calculating the gradient of the loss function. This technique is also sometimes called Backpropagation of errors, because the error is calculated at the output and distributed back through the network layers.
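As a minimal sketch of how gradient descent consumes these gradients, the snippet below applies a single update step; the names gradient_descent_step and grad_w and the learning-rate value are illustrative assumptions, not taken from this page.

import numpy as np

def gradient_descent_step(w, grad_w, learning_rate=0.1):
    # Move each weight a small step against its loss gradient,
    # which Backpropagation is assumed to have computed.
    return w - learning_rate * grad_w

# Example: for the quadratic loss 0.5*||w||^2 the gradient is w itself,
# so repeated steps shrink the weights towards the minimum at zero.
w = np.array([1.0, -2.0])
w = gradient_descent_step(w, grad_w=w)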

The Backpropagation algorithm has been repeatedly rediscovered and is equivalent to automatic differentiation in reverse accumulation mode.

Backpropagation requires a known, desired output for each input value—it is therefore considered to be a Supervised Learning method (although it is used in some unsupervised networks such as autoencoders).

Backpropagation is also a generalization of the delta rule to multi-layered Feedforward Neural networks, made possible by using the chain rule to iteratively compute gradients for each layer. Backpropagation is closely related to the Gauss–Newton algorithm, and is part of continuing research in neural Backpropagation.
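To make the layer-by-layer chain rule concrete, here is a minimal sketch of the backward pass for a fully connected network with sigmoid units and a squared-error loss; the conventions used (column-vector activations and the names weights, activations, delta) are assumptions for illustration, not taken from this page.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backward(weights, activations, y):
    # Apply the chain rule layer by layer, from the output back to the input.
    # weights[l] maps activations[l] to the pre-activation of layer l + 1;
    # activations[0] is the input and activations[-1] is the network output.
    grads = [None] * len(weights)
    # Error signal at the output layer: dLoss/dz = (a - y) * sigmoid'(z).
    a_out = activations[-1]
    delta = (a_out - y) * a_out * (1 - a_out)
    grads[-1] = np.dot(delta, activations[-2].T)
    # Walk backwards through the hidden layers, reusing the previous delta.
    for l in range(2, len(weights) + 1):
        a = activations[-l]
        delta = np.dot(weights[-l + 1].T, delta) * a * (1 - a)
        grads[-l] = np.dot(delta, activations[-l - 1].T)
    return grads

# Example with one hidden layer: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
a0 = rng.normal(size=(3, 1))
a1 = sigmoid(np.dot(W1, a0))
a2 = sigmoid(np.dot(W2, a1))
grads = backward([W1, W2], [a0, a1, a2], y=np.ones((1, 1)))

Each entry of grads has the same shape as the corresponding weight matrix, so it can be fed directly into a gradient-descent update.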

Backpropagation can be used with any gradient-based optimizer, such as L-BFGS or truncated Newton.

Backpropagation is commonly used to train Deep Learning Artificial Neural networks with more than one hidden layer.

Backpropagation gradients for Gradient descent in Python#

dw = 1/m * np.dot(X, (A - Y).T)   # gradient of the cost with respect to the weights w
db = 1/m * np.sum(A - Y)          # gradient of the cost with respect to the bias b
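
For context, a self-contained sketch of where these two gradient lines fit is given below; the surrounding pieces (the sigmoid, propagate and optimize functions, and the (n, m) layout of X) follow a common single-neuron logistic-regression convention and are assumptions, not part of this page.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    # Forward pass and gradients for a single sigmoid unit.
    # w: (n, 1) weights, b: scalar bias, X: (n, m) inputs, Y: (1, m) labels.
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)   # predictions for the whole batch
    dw = 1/m * np.dot(X, (A - Y).T)   # gradient w.r.t. the weights
    db = 1/m * np.sum(A - Y)          # gradient w.r.t. the bias
    return dw, db

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.1):
    # Plain gradient descent: repeatedly step against the gradients.
    for _ in range(num_iterations):
        dw, db = propagate(w, b, X, Y)
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b

# Tiny example: two features (an AND-like target), four training examples.
X = np.array([[0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
Y = np.array([[0, 0, 0, 1]])
w, b = optimize(np.zeros((2, 1)), 0.0, X, Y)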

Category#

Artificial Intelligence
