Loss function

## Overview#

The loss function measures the discrepancy between the prediction ŷ⁽ⁱ⁾ and the desired output y⁽ⁱ⁾.

In other words, the loss function computes the error for a single training example. In mathematical optimization, statistics, econometrics, decision theory, machine learning, and computational neuroscience, a loss function (or cost function) is a function that maps an event, or the values of one or more variables, onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (in specific contexts variously called a reward function, profit function, utility function, fitness function, etc.), in which case it is to be maximized.


### Loss functions in statistics#

In statistics, the loss function is used for parameter estimation, and the event in question is some function of the difference between the estimated and true values for an instance of data.

In classification, the loss function is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.

The loss function usually measures the penalty, or loss, incurred at a single training example. Some common loss functions are:

• Square loss: l(f(xᵢ|θ), yᵢ) = (f(xᵢ|θ) − yᵢ)², used in linear regression
• Hinge loss: l(f(xᵢ|θ), yᵢ) = max(0, 1 − f(xᵢ|θ)·yᵢ), used in SVMs
• 0/1 loss: l(f(xᵢ|θ), yᵢ) = 1 ⟺ f(xᵢ|θ) ≠ yᵢ, used in theoretical analysis and in the definition of accuracy
• Mean squared error (sometimes called L2 loss): MSE(θ) = (1/N) ∑ᵢ₌₁ᴺ (f(xᵢ|θ) − yᵢ)²
• SVM cost function: SVM(θ) = ‖θ‖² + C ∑ᵢ₌₁ᴺ ξᵢ (there are additional constraints connecting the slack variables ξᵢ with C and with the training set)
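The per-example losses above are straightforward to compute directly. Below is a minimal sketch in Python with NumPy; the function names are our own, and the hinge and 0/1 losses assume labels in {−1, +1} as is conventional for SVM-style formulations:

```python
import numpy as np

def square_loss(y_hat, y):
    # Square loss (f(x|θ) − y)², used in linear regression.
    return (y_hat - y) ** 2

def hinge_loss(y_hat, y):
    # Hinge loss max(0, 1 − f(x|θ)·y) for labels y in {−1, +1}, used in SVMs.
    return np.maximum(0.0, 1.0 - y_hat * y)

def zero_one_loss(y_hat_label, y):
    # 0/1 loss: 1 when the predicted label differs from the true label, else 0.
    return np.where(y_hat_label != y, 1.0, 0.0)

def mean_squared_error(y_hat, y):
    # MSE: the square loss averaged over all N examples.
    return np.mean((y_hat - y) ** 2)
```

Note that the first three operate on a single example (or elementwise on arrays), while MSE aggregates over the whole dataset, which is the distinction the Cost function section below draws.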

### Machine Learning#

Loss functions for classification are computationally feasible loss functions representing the price paid for inaccurate predictions in classification problems (problems of identifying which category a particular observation belongs to).

### Cost function#

The cost function is a summation (or average) of the loss function over the entire training dataset: the loss scores a single example, and the cost aggregates those scores to evaluate the model's parameters as a whole.
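The loss-versus-cost relationship can be sketched as follows, assuming the square loss from the list above (the function names here are illustrative, not standard API):

```python
import numpy as np

def loss(y_hat, y):
    # Per-example square loss.
    return (y_hat - y) ** 2

def cost(y_hat, y):
    # Cost: the per-example losses averaged over the whole training set.
    return np.mean(loss(y_hat, y))

predictions = np.array([1.0, 2.0, 3.0])
targets = np.array([1.0, 0.0, 3.0])
# Per-example losses are [0, 4, 0]; the cost is their mean, 4/3.
```

Minimizing this cost with respect to the model parameters (e.g. by gradient descent) is what fits the model to the data.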

### Category#

Artificial Intelligence