Reasoning in high-dimensional spaces
Working with high-dimensional feature spaces requires special mental precautions, because the intuition we have developed in three-dimensional space starts to fail. As an example, consider one peculiar property of n-dimensional spaces, known as the n-ball volume problem. An n-ball is simply a ball in n-dimensional Euclidean space. If we plot the volume of a unit n-ball (y axis) as a function of the number of dimensions (x axis), we'll see the following graph:
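In case the figure isn't at hand, the curve is easy to reproduce: the volume of a unit n-ball has the closed form V_n = π^(n/2) / Γ(n/2 + 1). Here is a minimal Python sketch (not code from the original text) that computes it:

```python
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the unit ball in n-dimensional Euclidean space:
    V_n = pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(1, 11):
    print(f"n = {n:2d}: V = {unit_ball_volume(n):.4f}")
```

Running this shows the volume climbing to its maximum of about 5.26 at n = 5 and then shrinking toward zero.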
Note that the volume rises at first, reaches its peak in five-dimensional space, and then starts decreasing. What does this mean for our models? For KNN specifically, it means that starting from five features, the more features you have, the greater the radius of the sphere centered on the point you're trying to classify must be to cover its k nearest neighbors.
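You can see this effect in a small simulation (an illustration of the effect, not code from the original text): for points drawn uniformly from a hypercube, the distance from a query point to its k-th nearest neighbor grows steadily with the number of dimensions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
k = 5  # number of neighbors the sphere has to cover

# Distance from the origin (our query point) to its k-th nearest
# neighbor among 1,000 points uniform in the cube [-1, 1]^n.
for n in (2, 5, 10, 50, 100):
    points = rng.uniform(-1.0, 1.0, size=(1000, n))
    distances = np.sort(np.linalg.norm(points, axis=1))
    print(f"n = {n:3d}: radius covering {k} neighbors ~ {distances[k - 1]:.3f}")
```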
The counter-intuitive phenomena that arise in high-dimensional spaces are colloquially known as the curse of dimensionality. The term covers a wide range of effects that can't be observed in the three-dimensional space we're used to dealing with. Pedro Domingos, in his paper A Few Useful Things to Know about Machine Learning, provides some examples:
Speaking specifically of KNN, the algorithm treats all dimensions as equally important. This creates problems when some of the features are irrelevant, especially in high dimensions, because the noise introduced by the irrelevant features drowns out the signal contained in the good ones. In our example, we sidestepped these multidimensional problems by taking into account only the magnitude of each three-dimensional vector in our motion signals, as sketched below.
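A minimal sketch of that preprocessing step, assuming the motion signal arrives as an array of (x, y, z) samples (the function name and array layout are illustrative, not the book's actual code):

```python
import numpy as np

def to_magnitudes(signal: np.ndarray) -> np.ndarray:
    """Collapse a (T, 3) motion signal of (x, y, z) samples into a (T,)
    series of vector magnitudes, reducing three features to one and
    discarding direction."""
    return np.linalg.norm(signal, axis=1)

# Example: three accelerometer samples (hypothetical values).
samples = np.array([[0.0, 0.0, 9.8],
                    [1.0, 2.0, 9.4],
                    [0.5, 0.5, 9.7]])
print(to_magnitudes(samples))  # -> [9.8, ~9.662, ~9.726]
```

Reducing each vector to its magnitude makes the feature rotation-invariant and keeps the dimensionality low, at the cost of losing directional information.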