## Overview#

In Machine Learning (and statistics) the objective is **fitting** a model to a Training dataset.

Machine Learning is a form of Inductive Learning.

Machine Learning is the sub-field of computer science that, according to Arthur Samuel in 1959, gives "computers the ability to learn without being explicitly programmed."

Machine Learning can be summarised as learning a mapping function (f) that maps input variables (X) to output variables (Y).

An algorithm learns this target mapping function from the Training dataset.

The form of the mapping function is unknown, and the job of Machine Learning practitioners is to evaluate different Machine Learning algorithms and see which is best at fitting the underlying function. Different algorithms make different assumptions, or biases, about the form of the function and how it can be learned.
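As a minimal sketch of "fitting" a mapping function, assume (purely for illustration; the data and the linear form are my own, not from the text) that f is linear. Estimating it from a Training dataset by ordinary least squares can then look like:

```python
# Minimal sketch: estimate a mapping f(x) = a*x + b from a training
# dataset using ordinary least squares (standard library only).
# The data and the assumed linear form are illustrative assumptions.

def fit_linear(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training dataset: inputs X and outputs Y produced by some unknown f
X = [1.0, 2.0, 3.0, 4.0]
Y = [2.1, 3.9, 6.0, 8.1]

a, b = fit_linear(X, Y)
print(f"estimated f(x) = {a:.2f}*x + {b:.2f}")
```

A different algorithm (say, a decision tree) would make a different assumption about f's form and could fit the same data differently; that choice of assumption is the algorithm's bias.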

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding application routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using a training dataset and algorithms that give it the ability to learn how to perform the task.[2]

In supervised machine learning an algorithm learns a Mapping function from the Training dataset.

### Machine Learning Goal#

The goal of any supervised machine learning algorithm is to best estimate the mapping function (f) for the output variable (Y) given the input data (X). The mapping function is often called the target function because it is the function that a given supervised machine learning algorithm aims to approximate.

The prediction error for any machine learning algorithm can be broken down into three parts:

- Bias Error
- Variance Error
- Irreducible Error

The irreducible error cannot be reduced regardless of what algorithm is used. It is the error introduced by the chosen framing of the problem and may be caused by factors like unknown variables that influence the mapping of the input variables to the output variable. Here we focus on the two parts we can influence with our machine learning algorithms: the bias error and the variance error.

### Regression, Classification, Clustering#
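For squared-error loss, this three-part decomposition is commonly written as follows, where \(\hat{f}\) is the learned estimate of the target function and \(\sigma^2\) denotes the irreducible error:

```latex
\mathbb{E}\big[(Y - \hat{f}(X))^2\big]
  = \underbrace{\mathrm{Bias}\big[\hat{f}(X)\big]^2}_{\text{bias error}}
  + \underbrace{\mathrm{Var}\big[\hat{f}(X)\big]}_{\text{variance error}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```

The expectation is taken over training datasets: the bias term measures how far the average learned function is from the target, and the variance term measures how much the learned function changes from one training dataset to another.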

Machine Learning models typically perform regression, classification, or clustering.

Machine Learning evolved from the study of pattern-recognition and computational learning theory in Artificial Intelligence.
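As an illustrative sketch (the toy rules and data below are my own assumptions, not from the text), the three model types differ in what they predict:

```python
# Toy illustrations of the three model types:
# regression predicts a continuous value, classification a discrete
# label, and clustering a grouping of unlabeled data.

def regress(x):
    # Regression: map an input to a continuous output (here, a fixed line).
    return 2.0 * x + 1.0

def classify(x, threshold=5.0):
    # Classification: map an input to a discrete label.
    return "high" if x > threshold else "low"

def cluster(points, boundary=10.0):
    # Clustering: group unlabeled points (here, a naive 1-D split).
    return [0 if p < boundary else 1 for p in points]

print(regress(3.0))             # a continuous value
print(classify(7.0))            # a discrete label
print(cluster([1, 2, 12, 13]))  # group assignments, no labels involved
```

In practice the rules inside these functions are not hand-written but learned from data; only clustering learns without output labels.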

Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data – rather than following strictly static program instructions, such algorithms make data-driven predictions or decisions by building a model from sample inputs.

Machine Learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible.

### Common Machine Learning Taxonomy#

Well, there is almost no common Machine Learning taxonomy.

### Category#

Artificial Intelligence

### More Information#

There might be more information for this subject on one of the following:

- Algorithm
- Artificial Intelligence
- Artificial Neural network
- Bias
- Bias error
- Classification
- Cloud computing
- Cluster analysis
- Data-lake
- Deep Learning
- Dependent variables
- Features
- Fraud and Risk
- Function
- Hyperparameters
- Independent variable
- Learning
- Linear Regression
- Long Short Term Memory networks
- Loss function
- Mean Squared Error
- Pattern-recognition
- Recurrent Neural networks
- Regularization
- Reinforcement learning
- Supervised Learning
- TensorFlow
- Turing test
- Unsupervised Learning
- Variance error
- Web Blog_blogentry_270717_1

- [#1] - Machine_learning - based on information obtained 2017-07-28
- [#2] - What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? - based on information obtained 2017-12-10