## SVM: Supervised Learning Algorithm

Now that we have discussed the linear regression and logistic regression algorithms in detail, it’s time to move on to the Support Vector Machine (SVM).

SVM is another simple yet crucial algorithm that every machine learning expert should have in their arsenal.

SVM is highly preferred by many practitioners because it can achieve high accuracy with relatively little computation.

SVM can be used for both regression and classification tasks, but it is most widely used for classification.
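To make the two use cases concrete, here is a minimal sketch using scikit-learn (an assumption on my part, since no library is named here): `SVC` handles classification with discrete labels, while `SVR` handles regression with continuous targets. The data is purely illustrative.

```python
# Illustrative sketch: the same SVM machinery for classification and regression.
# Assumes scikit-learn is installed; the data below is made up for the example.
from sklearn.svm import SVC, SVR

X = [[0.0], [1.0], [2.0], [3.0]]

# Classification: discrete class labels.
clf = SVC(kernel="linear").fit(X, [0, 0, 1, 1])

# Regression: continuous target values.
reg = SVR(kernel="linear").fit(X, [0.1, 1.1, 1.9, 3.2])

print(clf.predict([[2.5]]))  # a class label
print(reg.predict([[2.5]]))  # a continuous estimate
```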

Quick heads up. I’d suggest you go through linear regression and logistic regression before this.

Done? Awesome! Let’s move on!

## How does SVM work?

Let’s understand and visualize the basics of SVM using a simple example.

Let’s imagine we have two tags, red and blue, and our data has two features, x and y. We want a classifier that, given a pair of (x, y) coordinates, outputs whether the point is red or blue. We plot our already labeled training data on a plane:

SVM takes these data points and outputs the hyperplane that best separates the tags (note that in two dimensions, this hyperplane is just a line). This line is the ‘decision boundary’: anything that falls on one side of it we classify as blue, and anything on the other side as red.
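The idea above can be sketched in a few lines. This uses scikit-learn’s `SVC` with a linear kernel (my assumption; the text names no library), and the red/blue coordinates are hypothetical:

```python
# Sketch of the red/blue example, assuming scikit-learn is available.
from sklearn.svm import SVC

# Hypothetical (x, y) training points with their tags.
X = [[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]]
y = ["red", "red", "red", "blue", "blue", "blue"]

# A linear kernel means the decision boundary is a straight line in 2D.
clf = SVC(kernel="linear")
clf.fit(X, y)

# Points near each cluster fall on that cluster's side of the boundary.
print(clf.predict([[1.5, 2.0]]))  # near the red cluster
print(clf.predict([[7.0, 6.0]]))  # near the blue cluster
```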

However, what exactly is the best hyperplane? For SVM, it’s the one that maximizes the margin to both tags. In other words: the hyperplane (remember, a line in this case) whose distance to the nearest element of each tag is the largest.
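With a fitted linear SVM you can inspect this directly: the hyperplane is w·x + b = 0, the nearest elements of each tag are the support vectors, and the margin width works out to 2/‖w‖. A minimal sketch, again assuming scikit-learn and the same made-up red/blue data as above:

```python
# Inspecting the maximum-margin hyperplane of a fitted linear SVM.
# Assumes scikit-learn and NumPy; the data is illustrative.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = ["red", "red", "red", "blue", "blue", "blue"]

clf = SVC(kernel="linear").fit(X, y)

w = clf.coef_[0]       # normal vector of the separating hyperplane
b = clf.intercept_[0]  # offset
margin = 2 / np.linalg.norm(w)  # distance between the two margin lines

print("hyperplane: %.2f*x + %.2f*y + %.2f = 0" % (w[0], w[1], b))
print("support vectors:\n", clf.support_vectors_)  # nearest points of each tag
print("margin width: %.2f" % margin)
```

The support vectors are the only training points that determine the boundary, which is where the algorithm gets its name.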