## Overview

K-Nearest Neighbor (KNN) is a supervised, non-parametric method used in machine learning for both classification and regression.

In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:

- In k-NN classification, the output is a class membership. An object is classified by a **majority vote of its neighbors**, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor.
- In k-NN regression, the output is the property value for the object. This value is the **average of the values** of its k nearest neighbors.
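The two cases above can be sketched in plain Python (a minimal illustration; the function name and the `mode` parameter are my own choices, not a standard API):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3, mode="classification"):
    """Predict the label (majority vote) or value (average) for `query`
    from its k nearest neighbors, using Euclidean distance.
    `train` is a list of (point, label_or_value) pairs."""
    # Sort the training pairs by distance to the query point, keep k.
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    labels = [label for _, label in neighbors]
    if mode == "classification":
        # Majority vote among the k nearest neighbors.
        return Counter(labels).most_common(1)[0][0]
    # Regression: plain average of the k nearest values.
    return sum(labels) / k

# Classification: two small clusters labeled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
knn_predict(train, (0.5, 0.5), k=3)   # -> "a"
knn_predict(train, (5.5, 5.5), k=3)   # -> "b"
```

Note the k = 1 special case falls out naturally: with `k=1` the sort keeps only the single nearest neighbor, and the "vote" is just that neighbor's label.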

For both classification and regression, a useful technique is to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than more distant ones. For example, a common weighting scheme gives each neighbor a weight of 1/d, where d is the distance to the neighbor.
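The 1/d weighting scheme can be sketched for the regression case (a minimal example; the function name and the epsilon guard are my own assumptions, added to avoid dividing by zero when a query coincides with a training point):

```python
import math

def weighted_knn_regress(train, query, k=3):
    """k-NN regression where each neighbor's value is weighted by 1/d,
    so nearer neighbors pull the estimate more strongly toward them.
    `train` is a list of (point, value) pairs."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    eps = 1e-9  # guard against d == 0 (query exactly on a training point)
    weights = [1.0 / (math.dist(p, query) + eps) for p, _ in neighbors]
    values = [v for _, v in neighbors]
    # Weighted average: sum(w_i * v_i) / sum(w_i).
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

train = [((0,), 1.0), ((1,), 2.0), ((2,), 3.0), ((10,), 100.0)]
# The query 0.2 is much closer to the point with value 1.0, so the
# weighted estimate lands nearer 1.0 than the unweighted mean (2.0).
weighted_knn_regress(train, (0.2,), k=3)
```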

The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.

A peculiarity of the K-Nearest Neighbor algorithm is that it is sensitive to the local structure of the data.

### More Information

There might be more information on this subject in the following:

- [#1] K-nearest_neighbors_algorithm - based on information obtained 2018-04-11