!!! Overview
[{$pagename}] measures the discrepancy between the prediction (𝑦̂(𝑖)) and the desired output (𝑦(𝑖)).
In other words, the [{$pagename}] computes the error for a single [Training dataset] example.
[Loss function/Screen Shot 2017-12-26 at 06.51.34.png]
* 𝑦̂ is the predicted output vector. It can also be denoted a^[L], the activation of the final layer, where L is the number of layers
* y is the ground truth from the [Training dataset]
In mathematical optimization, statistics, econometrics, decision theory, [Machine Learning] and computational neuroscience, a [{$pagename}] (also called a cost function) is a function that maps an event, or the values of one or more [variables], onto a real number intuitively representing some "cost" associated with the event.
An optimization problem seeks to minimize a [{$pagename}]. An objective function is either a [{$pagename}] or its negative (in specific [contexts] variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.
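For concreteness, here is a minimal Python sketch of a per-example loss (the binary cross-entropy is used purely as an illustration; the function and variable names are assumptions, not taken from the page or its sources):
{{{
import math

# Minimal sketch (names are illustrative): a loss takes the prediction
# y_hat of ONE training example and its ground-truth label y and returns
# a single real number measuring the discrepancy between them.

def binary_cross_entropy(y_hat: float, y: float) -> float:
    """Per-example binary cross-entropy loss."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(binary_cross_entropy(0.9, 1.0))  # confident, correct prediction -> ~0.105
print(binary_cross_entropy(0.2, 1.0))  # poor prediction               -> ~1.609
}}}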
!! [{$pagename}] in statistics
[{$pagename}] is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of [data].
In [classification], [{$pagename}] is the penalty for an __incorrect__ [classification] of an [example]. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control the loss is the penalty for failing to achieve a desired value. In financial risk management the function is mapped to a monetary loss.
A [{$pagename}] is usually a function that measures the penalty or loss at a specific [Training dataset] [example]. Some common [{$pagename}]s are listed below (a code sketch follows the list):
* square loss: l(f(xᵢ|θ), yᵢ) = (f(xᵢ|θ) − yᵢ)², used in linear regression
* hinge loss: l(f(xᵢ|θ), yᵢ) = max(0, 1 − yᵢ·f(xᵢ|θ)), used in SVMs
* 0/1 loss: l(f(xᵢ|θ), yᵢ) = 1 if f(xᵢ|θ) ≠ yᵢ, else 0; used in theoretical analysis and in the definition of accuracy
* [Mean Squared Error] (sometimes called L2 loss)
** MSE(θ) = (1/N) ∑ᵢ₌₁ᴺ (f(xᵢ|θ) − yᵢ)²
* SVM cost function: SVM(θ) = ‖θ‖² + C ∑ᵢ₌₁ᴺ ξᵢ (with additional constraints connecting the slack variables ξᵢ with C and with the training set)
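A small Python sketch of the losses listed above (NumPy-based; array names such as preds and targets, and the −1/+1 label encoding for the hinge and 0/1 losses, are assumptions for illustration only):
{{{
import numpy as np

# Illustrative sketch: preds plays the role of f(x_i | theta) and
# targets the role of y_i. For the hinge and 0/1 losses the targets
# are assumed to be encoded as -1/+1 and real-valued scores are
# thresholded at zero.

def square_loss(preds, targets):
    return (preds - targets) ** 2                      # linear regression

def hinge_loss(preds, targets):
    return np.maximum(0.0, 1.0 - targets * preds)      # SVMs

def zero_one_loss(preds, targets):
    return (np.sign(preds) != targets).astype(float)   # misclassification indicator

def mse(preds, targets):
    return np.mean((preds - targets) ** 2)             # Mean Squared Error (L2 loss)

preds   = np.array([ 0.9, -0.4,  2.0])
targets = np.array([ 1.0, -1.0, -1.0])
print(square_loss(preds, targets))    # per-example squared errors
print(hinge_loss(preds, targets))     # [0.1, 0.6, 3.0]
print(zero_one_loss(preds, targets))  # [0., 0., 1.]
print(mse(preds, targets))            # single averaged value
}}}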
!! [Machine Learning] [2]
[{$pagename}] for [classification] are computationally feasible [{$pagename}]s representing the price paid for inaccuracy of predictions in [classification] problems (problems of identifying which category a particular observation belongs to).
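As one illustration (not taken from the cited source), the logistic loss is such a computationally feasible surrogate for the 0/1 loss in binary classification; names and the −1/+1 label encoding below are assumptions:
{{{
import numpy as np

# Sketch of a smooth, convex surrogate for the 0/1 loss in binary
# classification (labels encoded as -1/+1).

def logistic_loss(score, label):
    """Logistic loss: an upper bound on the 0/1 loss."""
    return np.log1p(np.exp(-label * score))

print(logistic_loss( 2.5,  1))  # confident, correct score -> small loss (~0.08)
print(logistic_loss(-0.3,  1))  # wrong side of the margin -> larger loss (~0.85)
}}}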
!! [Cost function]
The [Cost function] is a summation (or average) of the [{$pagename}] over the entire [Training dataset].
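A minimal sketch of that relationship (averaging is used here as one common convention; function and variable names are illustrative assumptions):
{{{
import numpy as np

# Sketch: the cost function aggregates the per-example loss over the
# whole training dataset (here by averaging; a plain sum is also common).

def squared_error_loss(y_hat, y):
    return (y_hat - y) ** 2

def cost(predictions, labels):
    per_example = squared_error_loss(predictions, labels)
    return np.mean(per_example)

predictions = np.array([0.8, 0.1, 0.4])
labels      = np.array([1.0, 0.0, 1.0])
print(cost(predictions, labels))  # (0.04 + 0.01 + 0.36) / 3 ~= 0.1367
}}}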
!! Category
%%category [Artificial Intelligence]%%
!! More Information
There might be more information for this subject on one of the following:
[{ReferringPagesPlugin before='*' after='\n' }]
----
* [#1] - [Loss_function|Wikipedia:Loss_function|target='_blank'] - based on information obtained 2017-11-29
* [#2] - [Loss functions for classification|Wikipedia:Loss_functions_for_classification|target='_blank'] - based on information obtained 2017-11-29