!!! Overview
In [{$pagename}] (and statistics) the objective is __fitting__ a [model] to a [Training dataset].

[{$pagename}] is [Inductive Learning].

[{$pagename}] is the sub-field of computer science that, according to Arthur Samuel in [1959|Year 1959], gives "computers the ability to [learn] without being explicitly programmed."

[{$pagename}] can be summarised as [learning] a [mapping function] (f) that maps input [variables] (X) to output [variables] (Y). An [algorithm] learns this target [mapping function] from the [Training dataset]. The form of the mapping [function] is unknown, and the job of [{$pagename}] practitioners is to evaluate different [{$pagename}] [algorithms] and see which is better at fitting the underlying [function]. Different [algorithms] make different assumptions, or [biases|Bias], about the form of the [function] and how it can be learned. (A minimal sketch is given in the Examples section below.)

[{$pagename}] at its most basic is the practice of using [algorithms] to parse [data], [learn] from it, and then make a prediction or [classification] about something in the world. So rather than hand-coding [application] routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using a [training dataset] and [algorithms] that give it the ability to learn how to perform the task.[2]

In [supervised machine learning|Supervised Learning] an [algorithm] learns a [Mapping function] from the [Training dataset].

!! [{$pagename}] Goal
The goal of any [supervised machine learning|Supervised Learning] [algorithm] is to best estimate the mapping function (f) for the output variable (Y) given the input data (X). The mapping function is often called the target function because it is the function that a given supervised machine learning algorithm aims to approximate.

The prediction error for any machine learning algorithm can be broken down into three parts:
* Bias Error
* Variance Error
* Irreducible Error

The irreducible error cannot be reduced regardless of which algorithm is used. It is the error introduced by the chosen framing of the problem and may be caused by factors like unknown variables that influence the mapping of the input variables to the output variable.

Here we focus on the two parts we can influence with our machine learning algorithms: the bias error and the variance error. (The second sketch in the Examples section below illustrates this decomposition.)

!! [Regression|Logistic Regression], [Classification], [Clustering|Cluster analysis]
[{$pagename}] models use either:
* [Supervised Learning]
** [Logistic Regression]
** [Classification]
* [Unsupervised Learning]
** [Cluster analysis]

%%information
We see very little difference between [{$pagename}] and [Artificial Intelligence].
%%

[{$pagename}] evolved from the study of [pattern-recognition] and computational learning theory in [Artificial Intelligence]. [{$pagename}] explores the study and construction of [algorithms] that can [learn] from and make predictions on [data] – such [algorithms] overcome strictly static program instructions by making data-driven predictions or decisions, building a model from sample inputs. [{$pagename}] is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible.

!! Common [Machine Learning Taxonomy]
There is almost no common [Machine Learning Taxonomy].

!! Category
%%category [Artificial Intelligence]%%
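!! Examples
The following is a minimal sketch (in Python, assuming NumPy and scikit-learn are available; the data and model choices are purely illustrative) of [supervised machine learning|Supervised Learning]: two different [algorithms], each with its own [bias|Bias] about the form of the target function, learn an estimate of the [mapping function] (f) from the same [Training dataset], and we compare how well each one fits it.

{{{
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Illustrative training dataset: Y depends non-linearly on X, plus noise.
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(500, 1))
Y = np.sin(X).ravel() + rng.normal(0, 0.2, size=500)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0)

# Each algorithm makes different assumptions about the form of f(X).
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    model.fit(X_train, Y_train)   # learn an estimate of f from the training dataset
    error = mean_squared_error(Y_test, model.predict(X_test))
    print(f"{type(model).__name__}: held-out MSE = {error:.3f}")
}}}

The held-out error shows which algorithm's assumptions match the underlying function better; here the tree should beat the straight line, since the true relationship is non-linear.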
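A second sketch (again in Python, plain NumPy only; the sine target, polynomial models, and noise level are assumptions made for illustration) estimates the bias error and variance error empirically: the same model is refit on many freshly drawn training datasets, and its predictions at fixed test points are compared with the known true function. The noise variance plays the role of the irreducible error.

{{{
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(x)

noise_std = 0.3                       # irreducible error: variance = noise_std ** 2
x_test = np.linspace(0.0, 2.0 * np.pi, 50)

def fit_once(degree, n_train=40):
    """Fit a polynomial of the given degree to one noisy training sample."""
    x = rng.uniform(0.0, 2.0 * np.pi, n_train)
    y = true_f(x) + rng.normal(0.0, noise_std, n_train)
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x_test)

for degree in (1, 4, 10):
    # Repeat the experiment over many training datasets drawn from the same source.
    preds = np.array([fit_once(degree) for _ in range(200)])
    bias_sq = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree={degree:2d}  bias^2={bias_sq:.3f}  variance={variance:.3f}  "
          f"irreducible={noise_std ** 2:.3f}")
}}}

Simple models (low degree) tend to show high bias and low variance, flexible models the reverse, while the irreducible term stays fixed regardless of the algorithm.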
!! More Information
There might be more information for this subject on one of the following:
[{ReferringPagesPlugin before='*' after='\n' }]
----
* [#1] - [Machine_learning|Wikipedia:Machine_learning|target='_blank'] - based on information obtained 2017-07-28
* [#2] - [What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?|https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/|target='_blank'] - based on information obtained 2017-12-10