!!! Overview
[{$pagename}]
!! [{$pagename}] in [Machine Learning]
[{$pagename}] in [Machine Learning] penalizes the [weight]s when they grow too large. For [{$pagename}] with [Logistic Regression], you minimize the [Cost function] plus a penalty term scaled by lambda (λ), called the [regularization parameter], which is usually tuned on the development set.
[Regularization/l2-term.png]
Above shows the L2 [{$pagename}] term and how it is added to the [Cost function] __J__. The L2 term is simply the squared Euclidean norm of the weight [vector] w, which is why it is called L^2 [{$pagename}]; it is the most common form.
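As a concrete illustration, here is a minimal [Python] sketch of this cost; the function name, the variable lambd (see the note under Misc Notes below), and the NumPy usage are illustrative choices, not taken from this page:
{{{
import numpy as np

def l2_regularized_cost(w, b, X, y, lambd):
    """Cross-entropy cost J plus the L2 penalty (lambd / (2*m)) * ||w||_2^2."""
    m = X.shape[0]                      # number of training examples
    z = X @ w + b                       # linear part
    y_hat = 1.0 / (1.0 + np.exp(-z))    # sigmoid activation
    cross_entropy = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    l2_penalty = (lambd / (2 * m)) * np.sum(w ** 2)  # squared Euclidean norm of w
    return cross_entropy + l2_penalty   # note: the bias b is not regularized
}}}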
Popular examples of [{$pagename}] methods are:
* [LASSO] Regression (an L1 penalty) for [Linear Regression]
* [Ridge Regression] (an L2 penalty) for [Linear Regression]
* [Dropout Regularization], used mainly for neural networks
These methods are effective when there is collinearity in your input values and [Ordinary Least Squares] would cause [Overfitting] on the [Training dataset]; a sketch of this case follows below.
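As a rough illustration of the collinearity point, here is a hedged sketch using scikit-learn's Ridge and Lasso estimators; the toy data and the alpha values are assumptions made for the example:
{{{
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data with two nearly collinear input columns (illustrative assumption).
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty shrinks both coefficients
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty tends to zero one of them out
print("ridge:", ridge.coef_)
print("lasso:", lasso.coef_)
}}}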
!! Misc Notes
The [bias] term is generally not regularized.
In [Python], lambda is a reserved word, so the variable is often named lambd instead.
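Both notes show up in a single gradient-descent step; this is a minimal sketch, assuming the raw gradients grad_w and grad_b have already been computed:
{{{
def gradient_step(w, b, grad_w, grad_b, lambd, m, learning_rate=0.01):
    """One gradient-descent step; lambd stands in for the reserved word lambda."""
    # d/dw of (lambd / (2*m)) * ||w||^2 is (lambd / m) * w, added to w's gradient only
    w = w - learning_rate * (grad_w + (lambd / m) * w)
    b = b - learning_rate * grad_b   # the bias receives no penalty
    return w, b
}}}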
!! More Information
There might be more information for this subject on one of the following:
[{ReferringPagesPlugin before='*' after='\n' }]