https://github.com/jayantgoel001/knn
K-Nearest Neighbour
- Host: GitHub
- URL: https://github.com/jayantgoel001/knn
- Owner: JayantGoel001
- Created: 2020-01-18T20:30:27.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-06-06T03:01:45.000Z (over 5 years ago)
- Last Synced: 2025-04-07T03:36:12.148Z (6 months ago)
- Language: Jupyter Notebook
- Size: 472 KB
- Stars: 7
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# KNN
In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:
In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.
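A minimal sketch of both modes in plain NumPy (the names `knn_predict`, `X_train`, and the toy data are illustrative, not taken from this repository):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3, mode="classify"):
    """Predict for one query point by plurality vote (classification)
    or by averaging neighbor values (regression)."""
    # Euclidean distance from the query to every training example.
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training examples.
    nearest = np.argsort(distances)[:k]
    if mode == "classify":
        # Plurality vote among the k nearest labels.
        return Counter(y_train[nearest]).most_common(1)[0][0]
    # Regression: mean of the k nearest target values.
    return y_train[nearest].mean()

X_train = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 5.5]])
y_class = np.array([0, 0, 1, 1])
y_value = np.array([1.2, 1.4, 5.1, 5.3])

print(knn_predict(X_train, y_class, np.array([1.2, 1.1]), k=3))                  # -> 0
print(knn_predict(X_train, y_value, np.array([5.5, 5.2]), k=2, mode="regress"))  # -> 5.2
```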
k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until function evaluation. Both for classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than more distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor.
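A sketch of the 1/d weighting described above, reusing the toy data from the previous block (the small `eps` is an assumption added to guard against division by zero when a query coincides with a training point):

```python
def knn_weighted_regress(X_train, y_train, x_query, k=3, eps=1e-12):
    """Distance-weighted k-NN regression: each neighbor contributes
    with weight 1/d, so closer points dominate the average."""
    distances = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(distances)[:k]
    # Weight each neighbor by the inverse of its distance (eps avoids 1/0).
    weights = 1.0 / (distances[nearest] + eps)
    return np.average(y_train[nearest], weights=weights)

# Closer neighbors pull the prediction toward their own values.
print(knn_weighted_regress(X_train, y_value, np.array([5.2, 5.1]), k=3))
```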
The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.
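This "no explicit training" property is visible in libraries such as scikit-learn, where fitting a k-NN estimator essentially just stores and indexes the training set, and the real work happens at prediction time. A minimal usage sketch (the README does not name a library, so scikit-learn here is an assumption):

```python
from sklearn.neighbors import KNeighborsClassifier

# "Fitting" a lazy learner mostly means storing/indexing the training data;
# the distance computations happen inside predict().
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([[1, 1], [1.5, 2], [5, 5], [6, 5.5]], [0, 0, 1, 1])
print(clf.predict([[1.2, 1.1]]))  # -> [0]
```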
A peculiarity of the k-NN algorithm is that it is sensitive to the local structure of the data.