Notebook of a forgetful coder
To evaluate our models effectively we need something to evaluate them with. To achieve this we split our data into three groups: training, validation and evaluation…
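A minimal sketch of such a three-way split, assuming a shuffle-then-slice approach; the 60/20/20 ratios and function name are illustrative assumptions, not from the post:

```python
import random

def three_way_split(data, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle `data` and split it into training, validation and
    evaluation sets. The 60/20/20 ratios are an assumption for
    illustration only."""
    items = list(data)
    random.Random(seed).shuffle(items)  # fixed seed for a repeatable split
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, evaluation = three_way_split(range(100))
print(len(train), len(val), len(evaluation))  # 60 20 20
```

Shuffling before slicing matters: if the data is ordered (say, by class), a plain slice would put whole classes into one split.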
High-dimensional space is weird and counterintuitive, and the higher the number of dimensions the weirder it gets.
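One way to see this weirdness numerically is distance concentration: for random points in a high-dimensional unit cube, the gap between the nearest and farthest pair shrinks relative to the nearest distance. The sketch below is an illustrative assumption about what "weird" means here, not something the post spells out:

```python
import math
import random

def distance_spread(dim, n_points=200, seed=0):
    """Relative spread of pairwise distances, (max - min) / min, for
    random points in the unit hypercube of dimension `dim`.
    As `dim` grows, distances concentrate and the spread shrinks."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q)
             for i, p in enumerate(pts)
             for q in pts[i + 1:]]
    return (max(dists) - min(dists)) / min(dists)

for dim in (2, 10, 100, 1000):
    print(dim, round(distance_spread(dim), 2))
```

In 2 dimensions some pairs are far apart and some nearly touch; by 1,000 dimensions every pair is roughly the same distance apart, which is part of why nearest-neighbour methods degrade in very high dimensions.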
k-Nearest Neighbours is probably the simplest of the classification techniques. It works by looping through the training dataset and checking how close each point is to the sample you are trying to classify. Once it has gone through all of them, it returns a classification based on an arbitrary number of nearest points, k. If k is 1, it returns the class of the single nearest point; for values of k greater than 1 it returns the class that the majority of those k points belong to, so if you have two points from class a and one from class b it will assign the new point to class a.