The Cost function is the sum of the Loss function over the training dataset, plus (optionally) a model complexity penalty.

A Loss function is a component of a Cost function, which in turn is a type of objective function.

The Cost function is generally denoted by "J".

The entire concept of training an Artificial Neural Network is minimizing the Cost function. Normally you optimise the weights on the synapses, since you have no control over the input data in the training dataset.
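A minimal sketch of "training = minimizing the cost function": a single linear layer fitted by gradient descent. All names here (X, y, weights, learning_rate) are illustrative assumptions, not from this page; the inputs X stay fixed while only the synapse weights are updated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 3))             # training inputs: fixed, not under our control
true_w = np.array([1.0, -2.0, 0.5]) # assumed "ground truth" for the toy problem
y = X @ true_w                      # targets

weights = np.zeros(3)               # synapse weights: the part we optimise
learning_rate = 0.5
for _ in range(1000):
    y_hat = X @ weights
    # Cost J = sum of 1/2 * (y - yHat)^2 over the training set;
    # its gradient w.r.t. the weights is X^T (yHat - y).
    grad = X.T @ (y_hat - y)
    weights -= learning_rate * grad / len(X)

cost = 0.5 * np.sum((y - X @ weights) ** 2)
```

After enough steps the cost approaches zero and the learned weights recover the assumed true ones; with real data the cost bottoms out at whatever error the model cannot fit.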

Common Examples

  • Mean Squared Error: $MSE(\theta) = \frac{1}{N}\sum_{i=1}^{N}\big(f(x_i \mid \theta) - y_i\big)^2$
  • SVM cost function: $SVM(\theta) = \|\theta\|^2 + C\sum_{i=1}^{N}\xi_i$
    (there are additional constraints connecting $\xi_i$ with $C$ and with the training dataset)
  • Sum-of-squares cost: $J = \sum \frac{1}{2}(y - \hat{y})^2$
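A quick numeric check of the two squared-error forms above, using toy targets and predictions (assumed values, purely illustrative). MSE averages the squared errors, while J sums half squared errors; they differ only by scale, so both are minimised by the same parameters.

```python
import numpy as np

# Toy targets y and model predictions y_hat (assumed, not from this page).
y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 1.5, 2.5])

mse = np.mean((y_hat - y) ** 2)     # MSE(θ) = (1/N) Σ (f(x_i|θ) - y_i)²
J = 0.5 * np.sum((y - y_hat) ** 2)  # J = Σ ½ (y - ŷ)²
```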

This page (revision-6) was last changed on 06-Dec-2017 15:42 by jim