Data Science Python

What are the different types of distance metrics?

Understanding distances is crucial in developing machine learning models. Some supervised and unsupervised learning algorithms, such as k-means clustering and k-nearest neighbors, depend on distance metrics (e.g., Euclidean distance, Manhattan distance).

Euclidean Distance

Euclidean distance is the most commonly used distance metric, and it measures the straight line distance between two points in a multi-dimensional space. It is simple to calculate and intuitive to interpret, making it a popular choice for many applications.

It corresponds to the L2 norm of the difference between two vectors. (By contrast, cosine similarity is proportional to the dot product of two vectors and inversely proportional to the product of their magnitudes.)

Let's say we have two points, A = (p1, p2) and B = (q1, q2), in a 2D plane.

Formula for Euclidean Distance

So, the Euclidean distance between these two points, A and B, will be:

d(A, B) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2}

We use this formula when we are dealing with 2 dimensions. We can generalize this for an n-dimensional space as:

d(A, B) = \sqrt{\sum_{i=1}^{n}(p_i - q_i)^2}

Where,

  • n = number of dimensions
  • pi, qi = the coordinates of points A and B in the i-th dimension

Most machine learning algorithms, including k-means, use this distance metric to measure the similarity between observations.
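For instance, here is a minimal Python sketch of the n-dimensional formula (the function name and sample points are made up for illustration):

```python
import math

def euclidean_distance(p, q):
    """Straight-line (L2) distance between two n-dimensional points."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Two illustrative points in 3D space
print(euclidean_distance((1, 2, 3), (4, 6, 8)))  # sqrt(9 + 16 + 25) ≈ 7.07
```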

However, other distance metrics such as Manhattan distance, Chebyshev distance, and Minkowski distance have their own unique advantages and disadvantages.

For example, Manhattan distance is useful when the dimensions in the data have different units of measurement, while Chebyshev distance is ideal for applications where the maximum difference between two dimensions is more important than the individual differences.

Manhattan Distance

Manhattan distance is also called the L1 distance. It is the sum of the absolute differences between the coordinates of two points across all dimensions.

For two points A(x1, y1) and B(x2, y2) in a 2D plane, the Manhattan distance is:

d(A, B) = |x_1 - x_2| + |y_1 - y_2|

For example, consider two points A(2, 3) and B(5, 7) in a two-dimensional grid. The Manhattan distance between them would be:

Manhattan distance = |5 - 2| + |7 - 3| = 3 + 4 = 7
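The same calculation in Python, as a minimal sketch that reproduces the example above (the function name is illustrative):

```python
def manhattan_distance(p, q):
    """L1 distance: sum of absolute coordinate differences."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

print(manhattan_distance((2, 3), (5, 7)))  # |5 - 2| + |7 - 3| = 7
```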

Chebyshev Distance

Chebyshev distance, like Manhattan distance, is used to find the distance between two points in a grid system. The difference is that it allows diagonal moves as well as horizontal and vertical ones, so the distance is the maximum absolute difference along any single dimension.

For two points A(x1, y1) and B(x2, y2), the Chebyshev distance between A and B can be represented as:

d(A, B) = max(|x_2 - x_1|, |y_2 - y_1|)
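A minimal Python sketch of this metric (the function name and points are illustrative):

```python
def chebyshev_distance(p, q):
    """L-infinity distance: the largest absolute coordinate difference."""
    return max(abs(pi - qi) for pi, qi in zip(p, q))

print(chebyshev_distance((2, 3), (5, 7)))  # max(3, 4) = 4
```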

In addition to these standard distance metrics, there are also specialized distance metrics such as Mahalanobis distance, which takes into account the covariance between variables. This is especially useful in applications where the dimensions are correlated.

Mahalanobis Distance

The Mahalanobis distance is a measure between a sample point and a distribution.

The Mahalanobis distance from a vector y to a distribution with mean μ and covariance Σ is

d^2 = (y - \mu)\Sigma^{-1}(y - \mu)^T

This distance represents how far y is from the mean, measured in standard deviations.

In MATLAB, d2 = mahal(Y, X) returns the squared Mahalanobis distance d2 from each observation in Y to the reference samples in X; in the mahal function, μ and Σ are the sample mean and covariance of the reference samples, respectively.
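In Python, the equivalent calculation can be sketched with NumPy and SciPy. Note that scipy.spatial.distance.mahalanobis returns d rather than d2, and the reference samples below are made up for illustration:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Illustrative reference samples (rows = observations, columns = variables)
X = np.array([[2.0, 2.0], [2.0, 5.0], [6.0, 5.0], [7.0, 3.0], [4.0, 7.0]])

mu = X.mean(axis=0)                               # sample mean
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

y = np.array([5.0, 5.0])         # query point
d = mahalanobis(y, mu, cov_inv)  # SciPy returns d, not d^2
print(d, d ** 2)
```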

But that’s not all! There are also other important distance metrics used in machine learning, such as the Hamming distance, which is used to measure the difference between two strings of equal length.
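A minimal Python sketch of the Hamming distance (the example strings are arbitrary):

```python
def hamming_distance(s1, s2):
    """Number of positions at which two equal-length strings differ."""
    if len(s1) != len(s2):
        raise ValueError("strings must be of equal length")
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

print(hamming_distance("karolin", "kathrin"))  # 3
```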

The Haversine distance is used to calculate the distance between two points on a sphere, and the Cosine distance is a measure of similarity between two non-zero vectors of an inner product space.
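Cosine distance is commonly computed as 1 minus the cosine similarity; here is a minimal Python sketch with illustrative vectors:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two non-zero vectors."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm_u = math.sqrt(sum(ui ** 2 for ui in u))
    norm_v = math.sqrt(sum(vi ** 2 for vi in v))
    return 1 - dot / (norm_u * norm_v)

print(cosine_distance((1, 0), (0, 1)))  # orthogonal vectors -> 1.0
```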

Haversine Distance

The Haversine formula calculates the shortest distance between two points on a sphere, measured along the surface, using their latitudes and longitudes. It is important for use in navigation. The haversine can be expressed as a trigonometric function:


haversine(\theta)=sin^2\Big(\frac{\theta}{2}\Big)


The haversine of the central angle (which is d/r) is calculated by the following formula:


haversine\Big(\frac{d}{r}\Big)=haversine(\Phi_2-\Phi_1)+cos(\Phi_1)cos(\Phi_2)haversine(\lambda_2-\lambda_1)


where r is the radius of the Earth (6371 km), d is the distance between the two points, \Phi_1, \Phi_2 are the latitudes of the two points, and \lambda_1, \lambda_2 are the longitudes of the two points, respectively.

Solving for d by applying the inverse haversine, or by using the inverse sine function, we get:


 d = r hav^{-1}(h) = 2r sin^{-1}(\sqrt{h})

or

d = 2r sin^{-1}\bigg(\sqrt{sin^2\Big(\frac{\Phi_2-\Phi_1}{2}\Big)+cos(\Phi_1)cos(\Phi_2)sin^2\Big(\frac{\lambda_2-\lambda_1}{2}\Big)}\ \bigg)
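Putting the final formula into code, here is a minimal Python sketch (the London-to-Paris coordinates are approximate):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

# Approximate distance from London to Paris
print(haversine_distance(51.5074, -0.1278, 48.8566, 2.3522))  # ≈ 343 km
```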

Understanding the strengths and weaknesses of each distance metric is crucial in selecting the appropriate metric for a given problem. By choosing the right distance metric, we can improve the accuracy and efficiency of our machine learning models.

So, whether you are a beginner or an experienced practitioner, taking the time to learn about these distance metrics, and when each applies, will help you select the most appropriate one for your specific problem and improve the accuracy of your models.

Important Notice for college students

If you’re a college student with skills in programming languages and want to earn through blogging, mail us at geekycomail@gmail.com

For more programming-related blogs, visit us at Geekycodes and follow us on Instagram.
