Notebook of a forgetful coder

## Math Notes: A Short Intro To Differentiation

Differentiation is the process of computing the gradient (derivative) of a function; in many simple cases it can be used to minimise a function describing the loss (error) of our model.
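As a minimal sketch of that idea, here is gradient descent on a toy loss, using a central-difference numerical gradient (the function names and the example loss are illustrative, not from the notes):

```python
def numerical_gradient(f, x, h=1e-6):
    # Central-difference approximation of df/dx at x.
    return (f(x + h) - f(x - h)) / (2 * h)

def minimise(f, x0, learning_rate=0.1, steps=100):
    # Repeatedly step in the direction opposite the gradient.
    x = x0
    for _ in range(steps):
        x -= learning_rate * numerical_gradient(f, x)
    return x

loss = lambda x: (x - 3) ** 2  # toy loss with its minimum at x = 3
x_min = minimise(loss, x0=0.0)  # converges towards 3
```

Real models have many parameters rather than one, but the loop is the same: compute the gradient, step downhill, repeat.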

## Math Notes: The Types and Uses of Variables

If a quantity can take on different values, it’s a variable. Variables come in a variety of flavours and are extremely important in experimental design.

## Math Notes: The Cosine Rule and Dot Product

The cosine rule is a generalisation of Pythagoras’ theorem that applies to all triangles rather than just right-angled ones. For a right angle it reduces to Pythagoras’ theorem, and it provides the mathematical basis for using the dot product to measure the extent to which two vectors point in the same direction.
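A small sketch of that last point: from the cosine rule we get cos θ = (u·v) / (|u||v|), so the dot product tells us the angle between two vectors (the helper names here are my own):

```python
import math

def dot(u, v):
    # Sum of pairwise products of coordinates.
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    # Length of a vector: sqrt(u . u).
    return math.sqrt(dot(u, u))

def angle_between(u, v):
    # cos(theta) = (u . v) / (|u| |v|), rearranged from the cosine rule.
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

# Perpendicular vectors have dot product 0 and an angle of 90 degrees.
theta = math.degrees(angle_between([1, 0], [0, 1]))
```

A positive dot product means the vectors point broadly the same way, zero means perpendicular, and negative means they point in opposing directions.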

## Math Notes: Euclidean Distance

Euclidean Distance is the ‘ordinary’ straight-line distance between two points in Euclidean Space. It can be seen in action as the frustrating difference between how far away something is (the straight-line distance) and how far you have to go to get there (the rather disappointingly named distance travelled).
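A quick sketch of the formula, which is just Pythagoras’ theorem applied coordinate by coordinate (function name is my own):

```python
import math

def euclidean_distance(p, q):
    # Square root of the summed squared coordinate differences.
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Classic 3-4-5 right triangle: the straight-line distance is 5,
# but walking along the grid (the "distance travelled") is 3 + 4 = 7.
d = euclidean_distance((0, 0), (3, 4))
```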

## Machine Learning Notes: K-NN

k-Nearest Neighbours is probably the simplest of the classification techniques. It works by looping through the training dataset, checking each point to see how close it is to the sample you are trying to classify. Once it has gone through all of them, it returns a classification based on an arbitrary number of points (k): if k is 1, it returns the class of the single nearest point; for values of k greater than 1, it returns the class that the majority of those k points belong to. So if the nearest points are two from class a and one from class b, the new point is assigned to class a.
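That description can be sketched in a few lines, using Euclidean distance for “how close” and a majority vote over the k nearest labels (the function names and the tiny training set are illustrative):

```python
import math
from collections import Counter

def euclidean(p, q):
    # Straight-line distance between two points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_classify(training, sample, k=3):
    # training is a list of (point, label) pairs.
    # Sort the whole training set by distance to the sample...
    by_distance = sorted(training, key=lambda item: euclidean(item[0], sample))
    # ...then take a majority vote among the k nearest labels.
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
result = knn_classify(train, (1, 1), k=3)  # two of the three nearest are "a"
```

With k=3 the vote here is a, a, b, so the sample is assigned to class a, exactly as in the two-against-one example above.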